I am trying to implement the more command. I want to learn how I can tell whether there is a pipe. For example, if I type this in the shell:
cat file1 file2 | more
how can I handle that inside the implementation of more?
And is the implementation of more available as open source?
Actually, I could not succeed in reading from stdin. I've managed to make more file.txt work, but not cat file | more.
I think I should first read the input into a buffer and then print the buffer. My code contains:
if (argc == 1)
{
    fgets(line, 255, 0);
    printf("%s", line);
}
but it gives an error.
The more syntax is
more [options] [file_name]
If you don't provide a file name, the more command gets input from stdin; you can provide this input (via stdin) using a pipe, for example:
cat file.txt | more
This sends the output of the cat command to more. This is the same as doing:
more file.txt
You don't need to specifically know whether there is a pipe or not; you just need to check whether a filename was passed as an argument to more. If so, the input is considered to be the contents of the file. If not, the input is considered to originate from stdin.
As for the source code, some google searching will take you a long way. Here is some old source code from FreeBSD:
http://svnweb.freebsd.org/base/stable/2.0.5/usr.bin/more/
Or more recent source from the Ubuntu repositories:
http://bazaar.launchpad.net/~vcs-imports/util-linux-ng/trunk/files/head:/text-utils/
I suggest you first check whether argc equals 1; if it does, use stdin as your input file handle, so your program can handle the cat file.txt | more case, as sketched below.
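To make that concrete, here is a minimal sketch of the approach (an illustration only, not the real more source: it has no paging logic and assumes a fixed line buffer). Note that the error in the snippet from the question comes from passing 0 as the third argument of fgets(), which expects a FILE * such as stdin.

#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *in = stdin;              /* no filename given: read from stdin (cat file | more) */
    char line[256];

    if (argc > 1) {                /* a filename was given: more file.txt */
        in = fopen(argv[1], "r");
        if (in == NULL) {
            perror(argv[1]);
            return 1;
        }
    }

    while (fgets(line, sizeof line, in) != NULL)
        fputs(line, stdout);       /* the real more pauses after every screenful here */

    if (in != stdin)
        fclose(in);
    return 0;
}

With this, both more file.txt and cat file1 file2 | more read from the right place; the only difference is where in comes from.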
I've got a question here that I don't understand:
cat abc.dat | tee bcd.dat | tr ab ba > cde.dat
In this instance, I understand the translate part, but I'm a little confused as to what the pipe | does. I've been told that it connects the stdout of one program to the stdin of another. If I were to try to explain this (correct me if I'm wrong): you're taking abc.dat's contents, duplicating the output into bcd.dat, then taking the content from bcd.dat, translating instances of a and b into b and a respectively, and putting that into cde.dat?
The official answer is: abc.dat is copied to bcd.dat, and abc.dat is copied to cde.dat but with 'a' replaced by 'b' and 'b' replaced by 'a'. But why is abc.dat copied into cde.dat instead of bcd.dat? Does the pipe not continue?
The "official" answer is poorly worded. Neither tee nor tr knows anything about abc.dat; it just happens that what tr reads from tee is what tee read from cat, which is what cat read from abc.dat.
The tee utility reads from stdin and writes it firstly to stdout, and secondly to all files given as arguments. In your case, stdin is written to stdout and to the file bcd.dat. The pipe after tee links the stdout of tee to the stdin of tr. Although the data is the same, it is not the content of file bcd.dat that is being piped to tr. It is the output of the initial cat, which in turn is the content of file abc.dat.
I want to remove lots of temporary PS datasets with dataset names like MYTEST.**, but I still can't find an easy way to handle the task.
I meant to use the shell command below to remove them:
cat "//'dataset.list'"| xargs -I '{}' tsocmd "delete '{}'"
However, first I have to save the dataset list into a PS dataset or Unix file. In Unix, we can redirect the output of the ls command into a text file (ls MYTEST.* > dslist), but on TSO or an ISPF panel there seems to be no simple command to do that.
Does anyone have any clue on this? Your comments would be appreciated.
The Rexx ISPF option is probably the easiest and can be reused in the future, but options include:
Use the save command in ISPF 3.4 to save the list to a file, then use a Rexx program on the file created by the save command
The listcat command, in particular
listcat lvl(MYTEST) ofile(ddname)
then write a Rexx program to do the actual delete
Alternatively, you can use the ISPF services LMDINIT, LMDLIST and LMDFREE in a Rexx program running under ISPF, i.e.
/* Rexx ISPF program to process datasets */
Address ispexec
"LMDINIT LISTID(lidv) LEVEL(MYTEST)"
"LMDLIST LISTID("lidv") OPTION(list) dataset(dsvar) stats(yes)"
do while rc = 0
  /* Delete or whatever, using the dataset name in dsvar */
  "LMDLIST LISTID("lidv") OPTION(list) dataset(dsvar) stats(yes)"  /* get the next dataset */
end
"LMDFREE LISTID("lidv")"
For all these methods you need to fully qualify the first high-level qualifier.
Learning Rexx / ISPF will serve you well into the future. In the ISPF editor, you can use the model command to get templates / information for all the ISPF commands:
Command ====> Model LMDINIT
will add a template for the LMDINIT command. There are templates for Rexx, COBOL, PL/I, ISPF panels, ISPF skeletons, messages, etc.
Thanks Bruce for the comprehensive answer. Following Bruce's tips, I just worked out a one-line shell command, as below:
tsocmd "listcat lvl(MYTEST) " | grep -E "MYTEST(\..+)+" | cut -d' ' -f3 | xargs -I '{}' tsocmd "delete '{}'"
The above command works perfectly.
Update - The IDCAMS DELETE command has had the MASK operand for a while. You use it like:
DELETE 'MYTEST.**' MASK
Documentation for z/OS 2.1 is here.
I have the following line in a bash script:
find . -name "paramsFile.*" | xargs -n131072 cat > parameters.txt
I need to make sure the order the files are concatenated in does not change when I use this command. For example, if I run this command twice on the same set of paramsFile.*, parameters.txt should be the same both times. My question is, is this the case? And if it isn't, how can I make sure it is?
Thanks!
Edit: the same question goes for xargs: would that change how the files are fed to cat?
Edit2: as William Pursell pointed out, this question is actually about find. Does find always return files in the same order?
From the description in man cat:
The cat utility reads files sequentially, writing them to the standard
output. The file operands are processed in command-line order.
If file is a single dash (`-') or absent, cat reads from the standard input. If file is a UNIX domain socket, cat connects to it and then reads it until EOF. This complements the UNIX domain binding capability available in inetd(8).
So yes, as long as you pass the files to cat in the same order every time, you'll be OK. Note, however, that find itself does not promise any particular order (it lists entries in whatever order they appear in the directory), so if you need a deterministic result it is safer to sort the names first, e.g. find . -name "paramsFile.*" | sort | xargs -n131072 cat > parameters.txt.
I have a java program in a UNIX environment which requires line buffered data to be passed into System.in.
Passing in keyboard input from the terminal is fine, however if I try to redirect the input from a file in a way such as:
java the_program < input.txt
the program will not execute properly.
In what ways can I have line buffered as opposed to block buffered data be passed into the program via stdin?
I have tried:
stdbuf -oL cat input.txt | java the_program
and
stdbuf -i0 java the_program < input.txt
as well as
grep --line-buffered . input.txt | java the_program
but have not had any luck.
Any ideas or suggestions?
Most of the problem is in the Java program - why/how does it need the input to be line-buffered? It should be designed to use the analogue of C's fgets() so that it just reads a line at a time. If there is no such analogue, then maybe you need to write a function/class that provides that service, taking whatever you can read in whatever units are provided, and either split or concatenate at line boundaries.
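For reference, the line-at-a-time reading idiom that paragraph alludes to looks roughly like this in C (a sketch only; in Java the equivalent would be a loop around a buffered line reader):

#include <stdio.h>

int main(void)
{
    char line[4096];

    /* fgets returns as soon as it has read one complete line (or hits EOF),
       so the program consumes its input line by line even when the data
       arrives from the pipe or file in larger blocks */
    while (fgets(line, sizeof line, stdin) != NULL) {
        /* handle one line here */
        fputs(line, stdout);
    }
    return 0;
}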
Failing that, you may have to indulge in unportable operations such as using the fstat() system call on the pipe file descriptor, only writing to the pipe when there is no data in it (looking at the st_size member). However, it is not guaranteed that it will work - unportable means it may not. Obviously, you'd ensure your program writes a line at a time and flushes the output if using standard I/O.
Is there a case of ... or context where cat file | ... behaves differently than ... <file?
When reading from a regular file, cat is in charge of reading the data, performs the reads as it pleases, and may constrain the way it writes the data to the pipeline. Obviously, the contents themselves are preserved, but anything else can be altered, for example the block size and the data arrival timing. Additionally, the pipe in itself isn't always neutral: it serves as an additional buffer between the input and ....
Quick and easy way to make the block size issue apparent:
$ cat large-file | pv >/dev/null
5,44GB 0:00:14 [ 393MB/s] [ <=> ]
$ pv <large-file >/dev/null
5,44GB 0:00:03 [1,72GB/s] [=================================>] 100%
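The same effect can be observed without pv using a tiny reader that reports how much each read() call returns (a hypothetical demo program, not something quoted above): when stdin is the file itself, reads can come back in large chunks, while a pipe typically hands back smaller chunks, often bounded by the pipe buffer size.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[1 << 20];             /* ask for up to 1 MiB per read() */
    ssize_t n;

    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        fprintf(stderr, "read %zd bytes\n", n);
    return 0;
}

Run it as ./a.out < large-file and as cat large-file | ./a.out to compare the reported chunk sizes.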
Besides what the other users have posted: when using input redirection from a file, standard input is the file itself, but when piping the output of cat, standard input is a stream carrying the contents of the file. When standard input is the file, the program will be able to seek within it, but a pipe does not allow seeking. You can see this by finding a zip file and running the following commands:
zipinfo /dev/stdin < thezipfile.zip
and
cat thezipfile.zip | zipinfo /dev/stdin
The first command will show the contents of the zipfile while the second will show an error, though it is a misleading error because zipinfo does not check the result of the seek call and errors later on.
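A few lines of C make the seekability difference concrete (a sketch for illustration, unrelated to zipinfo itself):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* lseek fails with ESPIPE when the descriptor is a pipe or FIFO */
    if (lseek(STDIN_FILENO, 0, SEEK_CUR) == (off_t)-1 && errno == ESPIPE)
        puts("stdin is a pipe or FIFO: seeking is not possible");
    else
        puts("stdin is seekable (e.g. a regular file)");
    return 0;
}

Running it as ./a.out < file reports a seekable stdin, while cat file | ./a.out reports a pipe.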
A useless use of cat is always to be avoided. It's like driving with the handbrake on. It wastes CPU cycles for nothing, the OS constantly context switching between the cat process and the next in the pipe. If all the world's useless cats were gone and stopped being invented, reinvented, passed on from father to son, we wouldn't have global warming because we could easily live with 1.21 Gigawatts of power saved.
Thanks. I feel better now. Please join me in my crusade to stamp out useless use of cat on stackoverflow. This site is, as far as I perceive it, a major contribution to the proliferation of useless cats. I don't blame the newbies, but I do want to teach them. Workers and newbies of the world, loosen the handbrakes and save the planet!!!1!
cat will allow you to pipe multiple files in sequentially. Otherwise, < redirection and cat file | produce the same side effects.
Pipes cause a subshell to be invoked for the command on the right. This interferes with shell variables: a variable set inside the loop is lost once the pipeline finishes. Compare:
cat foo | while read line
do
...
done
echo "$line"
versus
while read line
do
...
done < foo
echo "$line"
One further difference is behavior on a blocking open() of the input file.
For example, assuming input is a FIFO with no writers, one invocation will not spawn any child programs until the input file is opened, while the other will spawn two processes:
prog ... < a_fifo # 'prog' not launched until shell can open file
cat a_fifo | prog ... # 'prog' and 'cat' are running (latter may block on open)
In practice this rarely matters except in contrived circumstances. prog might periodically log or do some cleanup work while waiting for input, for example, which you might want to happen even if no input is available. (Why wouldn't prog be sophisticated enough to open its own input fifo nonblocking?)
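The parenthetical question at the end points at one way out: prog could open the FIFO itself with O_NONBLOCK instead of relying on shell redirection. A rough sketch of that idea (hypothetical code; the FIFO name a_fifo is taken from the example above):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* O_NONBLOCK makes open() return at once even if no writer has the FIFO open yet */
    int fd = open("a_fifo", O_RDONLY | O_NONBLOCK);
    if (fd == -1) {
        perror("open a_fifo");
        return 1;
    }

    /* ... wait for data with poll()/select(), doing periodic logging or
       cleanup work in the meantime, then read() as usual ... */

    close(fd);
    return 0;
}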
cat file | starts up another program (cat) that doesn't have to start in the second case. It also makes it more confusing if you want to use "here documents". But it should behave the same.