In our Systems Programming class, we were given the assignment of recreating a simple 'ls'-style program.
I am near completion and need some guidance on how to determine which functions to execute based on which flags were passed in.
I am able to loop through the char* argv[] array to determine which flags were used, but with 4 different options, I'm stuck trying to find an efficient way to call the corresponding functions.
The flags can be:
-l for long listing
-a for exposing hidden files
-U for unsorted listing
-s for sorted listing
These can be passed in any order.
Any tips?
Thanks all
Have a series of flags indicating the options, then run through all the arguments setting each flag as appropriate. In pseudo-code, that would be something like:
flagLongListing = false
flagAllFiles = false
flagSorted = false
for each arg in args:
    if arg starts with '-':
        for each char in arg[1:]:
            if char is 'l': flagLongListing = true
            if char is 'a': flagAllFiles = true
            if char is 'U': flagSorted = false
            if char is 's': flagSorted = true
and so on. This approach would handle all forms of option passing (combined, separate or a mixture):
ls -alU
ls -a -l -U
ls -al -U
Once you've finished processing the flags, you can go back and process the non-flags (like *.c if you're interested only in C files, for example).
But the output of each file would then be dictated by the flags that were set in the initial pass.
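If you're writing this in C, one way to translate that pseudocode might look like the sketch below (the variable names mirror the pseudocode; the unknown-option message and the final printf are just placeholders):
#include <stdbool.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* same defaults as the pseudocode above */
    bool flagLongListing = false;
    bool flagAllFiles    = false;
    bool flagSorted      = false;

    for (int i = 1; i < argc; i++)
    {
        if (argv[i][0] == '-')
        {
            /* walk the characters after the '-' so combined flags like -alU work */
            for (const char *p = &argv[i][1]; *p != '\0'; p++)
            {
                switch (*p)
                {
                case 'l': flagLongListing = true;  break;
                case 'a': flagAllFiles    = true;  break;
                case 'U': flagSorted      = false; break;
                case 's': flagSorted      = true;  break;
                default:  fprintf(stderr, "unknown option -%c\n", *p); break;
                }
            }
        }
        /* non-flag arguments (paths, patterns) would be collected here */
    }

    /* now decide which listing functions to call based on the flags */
    printf("long=%d all=%d sorted=%d\n", flagLongListing, flagAllFiles, flagSorted);
    return 0;
}
The standard getopt(3) function does the same job and is worth reading about, but a hand-rolled loop like this maps directly onto the pseudocode.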
I'm facing a problem while trying to pass a variable value to a grep command.
In essence, I want to grep out the lines which match my pattern, and the pattern is stored in a variable. I take the input from the user, and parse through myfile to see if the pattern exists (no problem here).
If it exists, I want to display the lines which contain the pattern, i.e. grep it out.
My code:
if {$a==1} {
puts "serial number exists"
exec grep $sn myfile } else {
puts "serial number does not exist"}
My input: SN02
My result when I run grep in a shell terminal (grep "SN02" myfile):
serial number exists
SN02 xyz rtw 345
SN02 gfs rew 786
My result when I try to execute grep in Tcl script:
serial number exists
The lines which match the pattern are not displayed.
Your (horrible IMO) indentation is not actually the problem. The problem is that exec does not automatically print the output of the exec'ed command*.
You want puts [exec grep $sn myfile]
This is because the exec command is designed to allow the output to be captured in a variable (like set output [exec some command])
* in an interactive tclsh session, as a convenience, the result of commands is printed. Not so in a non-interactive script.
To follow up on the "horrible" comment, your original code has no visual cues about where the "true" block ends and where the "else" block begins. Due to Tcl's word-oriented nature, it pretty well mandates the One True Brace Style of indentation.
Bash's builtin read command has a -i option, which specifies an initial input, which the user can accept as it is or edit or add to. I cannot find anything similar for Zsh's read command.
None of the options listed in the zshbuiltins man page seem relevant:
read [ -rszpqAclneE ] [ -t [ num ] ] [ -k [ num ] ] [ -d delim ]
[ -u n ] [ name[?prompt] ] [ name ... ]
Read one line and break it into fields using the characters in $IFS as separators, except as noted below. The first field is assigned to the first name, the second field to
the second name, etc., with leftover fields assigned to the last name. If name is omitted then REPLY is used for scalars and reply for arrays.
-r Raw mode: a `\' at the end of a line does not signify line continuation and backslashes in the line don't quote the following character and are not removed.
-s Don't echo back characters if reading from the terminal.
-q Read only one character from the terminal and set name to `y' if this character was `y' or `Y' and to `n' otherwise. With this flag set the return status is zero only
if the character was `y' or `Y'. This option may be used with a timeout (see -t); if the read times out, or encounters end of file, status 2 is returned. Input is
read from the terminal unless one of -u or -p is present. This option may also be used within zle widgets.
-k [ num ]
Read only one (or num) characters. All are assigned to the first name, without word splitting. This flag is ignored when -q is present. Input is read from the terminal unless one of -u or -p is present. This option may also be used within zle widgets.
Note that despite the mnemonic `key' this option does read full characters, which may consist of multiple bytes if the option MULTIBYTE is set.
-z Read one entry from the editor buffer stack and assign it to the first name, without word splitting. Text is pushed onto the stack with `print -z' or with push-line
from the line editor (see zshzle(1)). This flag is ignored when the -k or -q flags are present.
-e
-E The input read is printed (echoed) to the standard output. If the -e flag is used, no input is assigned to the parameters.
-A The first name is taken as the name of an array and all words are assigned to it.
-c
-l These flags are allowed only if called inside a function used for completion (specified with the -K flag to compctl). If the -c flag is given, the words of the current command are read. If the -l flag is given, the whole line is assigned as a scalar. If both flags are present, -l is used and -c is ignored.
-n Together with -c, the number of the word the cursor is on is read. With -l, the index of the character the cursor is on is read. Note that the command name is word
number 1, not word 0, and that when the cursor is at the end of the line, its character index is the length of the line plus one.
-u n Input is read from file descriptor n.
-p Input is read from the coprocess.
-d delim
Input is terminated by the first character of delim instead of by newline.
-t [ num ]
Test if input is available before attempting to read. If num is present, it must begin with a digit and will be evaluated to give a number of seconds, which may be a
floating point number; in this case the read times out if input is not available within this time. If num is not present, it is taken to be zero, so that read returns
immediately if no input is available. If no input is available, return status 1 and do not set any variables.
This option is not available when reading from the editor buffer with -z, when called from within completion with -c or -l, with -q which clears the input queue before
reading, or within zle where other mechanisms should be used to test for input.
Note that read does not attempt to alter the input processing mode. The default mode is canonical input, in which an entire line is read at a time, so usually `read
-t' will not read anything until an entire line has been typed. However, when reading from the terminal with -k input is processed one key at a time; in this case,
only availability of the first character is tested, so that e.g. `read -t -k 2' can still block on the second character. Use two instances of `read -t -k' if this is
not what is wanted.
If the first argument contains a `?', the remainder of this word is used as a prompt on standard error when the shell is interactive.
The value (exit status) of read is 1 when an end-of-file is encountered, or when -c or -l is present and the command is not called from a compctl function, or as described
for -q. Otherwise the value is 0.
The behavior of some combinations of the -k, -p, -q, -u and -z flags is undefined. Presently -q cancels all the others, -p cancels -u, -k cancels -z, and otherwise -z cancels both -p and -u.
The -c or -l flags cancel any and all of -kpquz.
How can I achieve the same goal in Zsh?
In zsh you could manually set the variable and use vared instead of read:
name=iconoclast
vared -p "Enter your name: " name
This is the equivalent of the following in bash:
read -p "Enter your name: " -e -i iconoclast name
I am trying to implement the more command. I want to learn how I can tell whether there is a pipe. For example, if I type from the shell
cat file1 file2 | more
how can I handle that inside the implementation of more?
And is the implementation of more available as open source?
Actually, I could not succeed in reading from stdin. I've managed to do more file.txt but not cat file | more.
I think I should first read the input into a buffer and then print the buffer. My code contains:
if(argc == 1)
{
fgets(line, 255, 0);
printf("%s", line);
}
but it gives an error.
The more syntax is
more [options] [file_name]
If you don't provide a file name, the more command gets input from stdin; you can provide this input (via stdin) using a pipe, for example:
cat file.txt | more
This sends the output of the cat command to more. This is the same as doing:
more file.txt
You don't need to specifically know if there is a pipe or not; you just need to check if a filename was passed as argument to more. If so, the input is considered to be the contents of the file. If not, the input is considered to originate from stdin.
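In C, that decision boils down to choosing between stdin and fopen(). A minimal sketch (with the actual paging logic left out) might look like this:
#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *in = stdin;                    /* no file name given: read the piped input */

    if (argc > 1)                        /* file name given: read from the file instead */
    {
        in = fopen(argv[1], "r");
        if (in == NULL)
        {
            perror(argv[1]);
            return 1;
        }
    }

    char line[256];
    while (fgets(line, sizeof line, in) != NULL)
        fputs(line, stdout);             /* a real more would pause every screenful here */

    if (in != stdin)
        fclose(in);
    return 0;
}
Note that a pager which paginates piped input also needs to read its keystrokes from /dev/tty, since stdin is already occupied by the data being paged.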
As for the source code, some google searching will take you a long way. Here is some old source code from FreeBSD:
http://svnweb.freebsd.org/base/stable/2.0.5/usr.bin/more/
Or more recent source from the Ubuntu repositories:
http://bazaar.launchpad.net/~vcs-imports/util-linux-ng/trunk/files/head:/text-utils/
I suggest you first check whether argc is equal to 1; if it is, use stdin as your input file handle, so your program can handle the cat file.txt | more situation.
I have a C++ program which I need to run multiple times.
For example:-
Run ./addTwoNumbers 50 times.
What would be a good approach to solve this problem?
In bash and other shells that support brace expansion,
for i in {1..50} ; do ./addTwoNumbers ; done
If this is code you are writing, take the number of times you want to "run" as an argument:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
int numTimes = 1;
if (argc > 1)
{
numTimes = atoi(argv[1]);
}
for (int i = 0; i < numTimes; i++)
{
// Your code goes here
}
}
(Note this doesn't do any sanity checking on the input, but it should point you in the right direction)
The way you were asking the question indicated that you had a finished binary and want to run it as if from the command line. The forward slash, to me, is a clue that you are using a Unix-like operating system. Well, that, and the fact that this post is tagged "Unix", which I just saw after writing the below. It should all be applicable.
The scheme of using the shell is probably the simplest one.
man bash tells you how to write a shell script. Actually we need to figure out what shell you are using. From the command line, type:
echo $SHELL
The response I get is
/bin/bash
Meaning that I am running bash. Whatever you get, copy it down; you will need it later.
The absolute simplest approach is to create a file with any standard text editor, with no suffix. Call it, for example, run50.
The first line is a special line that tells the Unix system to use bash to run the command:
#! /bin/bash
(or whatever you got from echo $SHELL).
Now, in the file, on the next line, type the complete path, from root, to the executable.
Type the command just as if you were typing it on the command line. You may put any arguments to your program there as well. Save your file.
Do you want to run the program, wait for it to finish, and then start the next copy? Or do you want to start it 50 times as fast as you can, without waiting for each to finish? If the former, you are done; if the latter, end the line with &
That tells the shell to start the program and to go on.
Now duplicate that line 50 times. Copy and paste it so it appears twice; select all and paste at the end to get 4 copies, then again for 8, 16, and 32. Now copy 18 more lines, paste those at the end, and you are done. If you happen to copy the line that says #! /bin/bash, don't worry about it; it is a comment to the shell.
Save the file.
From the command line, enter the following command:
chmod +x ./filenameofmyshellcommand
Where you will replace filenameofmyshellcommand with the name of the file you just created.
Finally run the command:
./filenameofmyshellcommand
And it should run the program 50 times.
If you are using bash, instead of duplicating the line 50 times, you can write a loop:
for ((i=1;i<=50;i++)) do
echo "Invocation $i"
/complete/path/to/your/command
done
I have included a message that tells you which run the command is on. If you are timing the program I would not recommend a "feelgood" message like this. You can end the line with & if you want the command to be started and the script to continue.
The double parentheses are required for this syntax, and you have to pay attention to the syntax.
for ((i=1;i<=50;i++)) do echo "invocation $i" & done
is an interesting thing to just enter from the command line, for fun. It will start the 50 echos disconnected from the command line, and they often come out in a different order than 1 to 50.
In Unix, there is a system() library call that will invoke a command more or less as if from the terminal. You can use that call from C++ or from perl or about a zillion other programs. But this is the simplest thing you can do, and you can time your program this way. It is the common approach in Unix for running one program or a sequence of programs, or for doing common tasks by running a series of system tools.
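As a rough illustration of that system() approach (just a sketch, using the binary name from the question):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    for (int i = 1; i <= 50; i++)
    {
        /* system() hands the command to the shell and waits for it to finish */
        if (system("./addTwoNumbers") == -1)
        {
            perror("system");
            return 1;
        }
    }
    return 0;
}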
If you are going to use Unix, you should know how to write a simple shell script.
int count = 0;

int main(void)
{
beginning:
    /* do whatever you need to do */
    count++;
    if (count < 50)   /* repeat until the work has run 50 times */
    {
        goto beginning;
    }
    return 0;
}
I am running a script on a Solaris box, specifically SunOS 5.7. I am not root. I am trying to execute a script similar to the following:
newgrp thegroup << FOO
source .login_stuff
echo "hello world"
FOO
The script runs. The problem is that it returns to the calling process, which puts me in the old group with .login_stuff not having been sourced. I understand this behavior. What I am looking for is a way to stay in the subshell. Now I know I could put an xterm& in the script (see below) and that would do it, but having a new xterm is undesirable.
Passing your current pid as a parameter:
newgrp thegroup << FOO
source .login_stuff
xterm&
echo $1
kill -9 $1
FOO
I do not have sg available.
Also, newgrp is necessary.
The following works nicely; put this bit at the top of the (Bourne or Bash) script:
### first become another group
group=admin
if [ $(id -gn) != $group ]; then
exec sg $group "$0 $*"
fi
### now continue with rest of the script
This works fine on Linuxen. One caveat: arguments containing spaces are broken apart. I suggest you use the
env arg1='value 1' arg2='value 2' script.sh construct to pass them in (I couldn't get it to work with $@ for some reason).
The newgrp command can only meaningfully be used from an interactive shell, AFAICT. In fact, I gave up on it about ... well, let's say long enough ago that the replacement I wrote is now eligible to vote in both the UK and the USA.
Note that newgrp is a special command 'built into' the shell. Strictly, it is a command that is external to the shell, but the shell has built-in knowledge about how to handle it. The shell actually exec's the program, so you get a new shell immediately afterwards. It is also a setuid root program. On Solaris, at least, newgrp also seems to ignore the SHELL environment variable.
I have a variety of programs that work around the issue that newgrp was intended to address. Remember, the command pre-dates the ability of users to belong to multiple groups at once (see the Version 7 Unix Manuals). Since newgrp does not provide a mechanism to execute commands after it executes, unlike su or sudo, I wrote a program newgid which, like newgrp, is a setuid root program and allows you to switch from one group to another. It is fairly simple: just main() plus a set of standardized error reporting functions (a rough sketch of the idea appears below, after the asroot summary). Contact me (first dot last at gmail dot com) for the source. I also have a much more dangerous command called 'asroot' that allows me (but only me - under the default compilation) to tweak user and group lists much more thoroughly.
asroot: Configured for use by jleffler only
Usage: asroot [-hnpxzV] [<uid controls>] [<gid controls>] [-m umask] [--] command [arguments]
<uid controls> = [-u usr|-U uid] [-s euser|-S euid][-i user]
<gid controls> = [-C] [-g grp|-G gid] [-a grp][-A gid] [-r egrp|-R egid]
Use -h for more help
Option summary:
-a group Add auxilliary group (by name)
-A gid Add auxilliary group (by number)
-C Cancel all auxilliary groups
-g group Run with specified real GID (by name)
-G gid Run with specified real GID (by number)
-h Print this message and exit
-i Initialize UID and GIDs as if for user (by name or number)
-m umask Set umask to given value
-n Do not run program
-p Print privileges to be set
-r euser Run with specified effective UID (by name)
-R euid Run with specified effective UID (by number)
-s egroup Run with specified effective GID (by name)
-S egid Run with specified effective GID (by number)
-u user Run with specified real UID (by name)
-U uid Run with specified real UID (by number)
-V Print version and exit
-x Trace commands that are executed
-z Do not verify the UID/GID numbers
Mnemonic for effective UID/GID:
s is second letter of user;
r is second letter of group
(This program grew: were I redoing it from scratch, I would accept user ID or user name without requiring different option letters; ditto for group ID or group name.)
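Going back to newgid: to give a sense of what such a wrapper does, here is a bare-bones sketch of the idea (not the actual newgid source; it omits the group-membership check and most of the error handling a real setuid-root tool must do):
/* Sketch of a newgid-style helper: switch the real/effective GID, then run a command.
   It must be installed setuid root to be allowed to call setgid() for arbitrary groups;
   a real version would first verify that the invoking user is a member of the group. */
#include <grp.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 3)
    {
        fprintf(stderr, "Usage: %s group command [args...]\n", argv[0]);
        return 1;
    }

    struct group *gr = getgrnam(argv[1]);
    if (gr == NULL)
    {
        fprintf(stderr, "%s: unknown group %s\n", argv[0], argv[1]);
        return 1;
    }

    uid_t real_uid = getuid();          /* the invoking user */
    if (setgid(gr->gr_gid) != 0 ||      /* switch group (needs the setuid-root privilege) */
        setuid(real_uid) != 0)          /* then give up root, back to the real user */
    {
        perror("cannot change group");
        return 1;
    }

    execvp(argv[2], &argv[2]);          /* run the command under the new group */
    perror(argv[2]);
    return 1;
}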
It can be tricky to get permission to install setuid root programs. There are some workarounds available now because of the multi-group facilities. One technique that may work is to set the setgid bit on the directories where you want the files created. This means that regardless of who creates the file, the file will belong to the group that owns the directory. This often achieves the effect you need - though I know of few people who consistently use this.
newgrp adm << ANYNAME
# You can do more lines than just this.
echo This is running as group \$(id -gn)
ANYNAME
..will output:
This is running as group adm
Be careful: make sure you escape the '$' with a backslash. The interactions are a little strange, because the here-document is expanded (even inside single quotes) before the shell runs as the other group. So, if your primary group is 'users', and the group you're trying to use is 'adm', then:
newgrp adm << END
# You can do more lines than just this.
echo 'This is running as group $(id -gn)'
END
..will output:
This is running as group users
..because 'id -gn' was run by the current shell, then sent to the one running as adm.
Anyways, I know this post is ancient, but hope this is useful to someone.
This example was expanded from plinjzaad's answer; it handles a command line which contains quoted parameters that contain spaces.
#!/bin/bash
group=wg-sierra-admin
if [ $(id -gn) != $group ]
then
# Construct an array which quotes all the command-line parameters.
arr=("${#/#/\"}")
arr=("${arr[*]/%/\"}")
exec sg $group "$0 ${arr[@]}"
fi
### now continue with rest of the script
# This is a simple test to show that it works.
echo "group: $(id -gn)"
# Show all command line parameters.
for i in $(seq 1 $#)
do
eval echo "$i:\${$i}"
done
I used this to demonstrate that it works.
% ./sg.test 'a b' 'c d e' f 'g h' 'i j k' 'l m' 'n o' p q r s t 'u v' 'w x y z'
group: wg-sierra-admin
1:a b
2:c d e
3:f
4:g h
5:i j k
6:l m
7:n o
8:p
9:q
10:r
11:s
12:t
13:u v
14:w x y z
Maybe
exec $SHELL
would do the trick?
You could use sh& (or whatever shell you want to use) instead of xterm&
Or you could also look into using an alias (if your shell supports this) so that you would stay in the context of the current shell.
In a script file, e.g. tst.ksh:
#! /bin/ksh
/bin/ksh -c "newgrp thegroup"
At the command line:
>> groups fred
oldgroup
>> tst.ksh
>> groups fred
thegroup
sudo su - [user-name] -c exit;
Should do the trick :)