How to implement a single program in C that replicates the following Unix pipeline: ps -ef | grep YOUR_USER_id | wc [duplicate]

This question already has answers here:
Connecting n commands with pipes in a shell?
(2 answers)
Learning pipes, exec, fork, and trying to chain three processes together
(1 answer)
Closed 8 years ago.
My teacher gave us a practice assignment to study for my Operating Systems class. The assignment is to pipe three processes together, implementing the commands in the title all at once. We are only allowed to use these calls when implementing it:
dup2()
one of the exec()
fork()
pipe()
close()
I can pipe two together but I don't know how to do three. Could someone either show me how to do it or at least point me in the right direction?
Here is my code so far:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
    int pfd[2];
    int pfdb[2];
    int pid;
    if (pipe(pfd) == -1) {
        perror("pipe failed");
        exit(-1);
    }
    if ((pid = fork()) < 0) {
        perror("fork failed");
        exit(-2);
    }
    if (pid == 0) {
        close(pfd[1]);
        dup2(pfd[0], 0);
        close(pfd[0]);
        execlp("ps", "ps", "-ef", (char *) 0);
        perror("ps failed");
        exit(-3);
    }
    else {
        close(pfd[0]);
        dup2(pfd[1], 1);
        close(pfd[1]);
        execlp("grep", "grep", "darrowr", (char *) 0);
        perror("grep failed");
        exit(-4);
    }
    exit(0);
}
Any help would be appreciated. Heck, a tutorial on how to complete it would be wondrous!

You're going to need 3 processes and 2 pipes to connect them together. You start with 1 process, so you are going to need 2 fork() calls, 2 pipe() calls, and 3 exec*() calls. You have to decide which of the processes the initial process will end up running; it is most likely either the ps or the wc. You can write the code either way, but decide before you start.
The middle process, the grep, is going to need a pipe for its input and a pipe for its output. You could create one pipe and one child process and have it run ps with its output going to a pipe; you then create another pipe and another child process and fix its pipes up before running grep; the original process would have both pipes open and would close most of the file descriptors before running wc.
The key thing with pipes is to make sure you close enough file descriptors. If you duplicate a pipe to standard input or standard output, you should almost always close both of the original file descriptors returned by the pipe() call; in your example, you should close both. And with two pipes, that means there are four descriptors to close.
Working code
Note the use of an error-report-and-exit function; it simplifies error reporting enormously. I have a library of functions that produce different error reports; this is a simple implementation of one of them. (It's overly simple: it doesn't include the program name in the messages.)
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
static void err_syserr(const char *fmt, ...);
int main(void)
{
    int p1[2];
    int p2[2];
    pid_t pid1;
    pid_t pid2;
    if (pipe(p1) == -1)
        err_syserr("failed to create first pipe");
    if ((pid1 = fork()) < 0)
        err_syserr("failed to fork first time");
    if (pid1 == 0)
    {
        dup2(p1[1], STDOUT_FILENO);
        close(p1[0]);
        close(p1[1]);
        execlp("ps", "ps", "-ef", (char *)0);
        err_syserr("failed to exec 'ps'");
    }
    if (pipe(p2) == -1)
        err_syserr("failed to create second pipe");
    if ((pid2 = fork()) < 0)
        err_syserr("failed to fork second time");
    if (pid2 == 0)
    {
        dup2(p1[0], STDIN_FILENO);
        close(p1[0]);
        close(p1[1]);
        dup2(p2[1], STDOUT_FILENO);
        close(p2[0]);
        close(p2[1]);
        execlp("grep", "grep", "root", (char *)0);
        err_syserr("failed to exec 'grep'");
    }
    else
    {
        close(p1[0]);
        close(p1[1]);
        dup2(p2[0], STDIN_FILENO);
        close(p2[0]);
        close(p2[1]);
        execlp("wc", "wc", (char *)0);
        err_syserr("failed to exec 'wc'");
    }
    /*NOTREACHED*/
}
#include <stdarg.h>
#include <errno.h>
#include <string.h>
static void err_syserr(const char *fmt, ...)
{
    int errnum = errno;
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);
    va_end(args);
    if (errnum != 0)
        fprintf(stderr, " (%d: %s)", errnum, strerror(errnum));
    putc('\n', stderr);
    exit(EXIT_FAILURE);
}
Sample output:
234 2053 18213
My machine is rather busy running root-owned programs, it seems.

Related

In-memory file to intercept stdout on function call

I've inherited this function that I have to call from my code. The function is
from a bizarre library in an arcane programming language -- so I cannot assume
almost anything about it, except for the fact that it prints some useful
information to stdout.
Let me simulate its effect with
void black_box(int n)
{
for(int i=0; i<n; i++) std::cout << "x";
std::cout << "\n";
}
I want to intercept and use the stuff it outputs. To that end I redirect stdout
to a temporary file, call black_box, then restore stdout and read the
stuff back from the temporary file:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <iostream>
int main(void){
int fd = open( "outbuff", O_RDWR | O_TRUNC | O_CREAT, 0600);
// Redirect stdout to fd
int tmp = dup(1);
dup2( fd, 1);
// Execute
black_box(100);
std::cout << std::flush;
// Restore old stdout
dup2(tmp, 1);
// Read output from the outbuff file
struct stat st;
fstat(fd, &st);
std::string buf;
buf.resize(st.st_size);
lseek(fd, 0, SEEK_SET);
read(fd, &buf[0], st.st_size);
close(fd);
std::cout << "Captured: " << buf << "\n";
return 0;
}
This works. But creating a file on disk for such a task is not something I'm
proud of. Can I make something like a file, but in-memory?
Before suggesting a pipe, please consider what would happen if
black_box overflows its buffer. And no, I need it single-threaded --
starting an extra process/thread defeats the whole purpose of what I'm trying
to achieve.
I want to intercept and use the stuff it outputs.
[...] please consider what would happen if black_box overflows its buffer.
I see two alternatives.
If you know the maximum size of the output, and the size is not too excessive, use a socketpair instead of a pipe. Unlike pipes, sockets let you change the size of their send/receive buffers (a sketch follows this list).
Use a temporary file under /tmp. In the normal case it will not touch the disk (unless the system is swapping). There are a few functions for the purpose, for example mkstemp() (or tmpfile()).
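Here is a minimal sketch of the first alternative, assuming the captured output fits in the socket's send buffer. black_box is simulated in plain C here, and the 1 MiB buffer request is an arbitrary choice for illustration, not anything from the original code:
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

/* stand-in for the opaque library function from the question */
static void black_box(int n)
{
    for (int i = 0; i < n; i++)
        putchar('x');
    putchar('\n');
}

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 1;

    int bufsize = 1 << 20;                    /* ask for ~1 MiB of buffering (errors ignored for brevity) */
    setsockopt(sv[1], SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));

    int saved = dup(STDOUT_FILENO);           /* remember the real stdout */
    dup2(sv[1], STDOUT_FILENO);               /* stdout now feeds the socket */

    black_box(100);
    fflush(stdout);                           /* push stdio's buffer into the socket */

    dup2(saved, STDOUT_FILENO);               /* restore stdout */
    close(saved);
    close(sv[1]);                             /* no writers left: read() will eventually see EOF */

    char buf[4096];
    ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
    if (n > 0)
        printf("Captured %d bytes: %.*s", (int)n, (int)n, buf);
    close(sv[0]);
    return 0;
}
If the output could exceed whatever buffer size the kernel actually grants, fall back to the mkstemp()/tmpfile() approach from the second alternative.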

Difficulty in redirecting output in a dup2 and pipe code in Unix

I am new to Unix. In the following code, I pass three arguments from the command line ("~$ foo last sort more") in order to replicate "~$ last | sort | more". I am trying to create a program that will take three arguments (at least three for now). The parent will fork three processes. The first process will write to the pipe, the second process will read from and write to the pipe, and the third process will read from the pipe and write to stdout (the terminal). The first process will exec "last", the second will exec "sort", and the third will exec "more", and the processes will sleep for 1, 2 and 3 seconds in order to synchronize. I am pretty sure I am having trouble creating a pipe and redirecting the input and output. I don't get any output to the terminal, but I can see that the processes have been created. I would appreciate some help.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>
#include <unistd.h>
#include <signal.h>
#include <fcntl.h>
#include <errno.h>
#define FOUND 1
#define NOT_FOUND 0
#define FIRST_CHILD 1
#define LAST_CHILD numargc
#define PATH_1 "/usr/bin/"
#define PATH_2 "/bin/"
#define DUP_READ() \
if (dup2(fdes[READ], fileno(stdin)) == -1) \
{ \
perror("dup error"); \
exit(4); \
}
#define DUP_WRITE() \
if (dup2(fdes[WRITE], fileno(stdout)) == -1) \
{ \
perror("dup error"); \
exit(4); \
}
#define CLOSE_FDES_READ() \
close(fdes[READ]);
#define CLOSE_FDES_WRITE() \
close(fdes[WRITE]);
#define EXEC(x, y) \
if (execl(arraycmds[x], argv[y], (char*)NULL) == -1) \
{ \
perror("EXEC ERROR"); \
exit(5); \
}
#define PRINT \
printf("FD IN:%d\n", fileno(stdin)); \
printf("FD OUT:%d\n", fileno(stdout));
enum
{
READ, /* 0 */
WRITE,
MAX
};
int cmdfinder( char* cmd, char* path); /* 1 -> found, 0 -> not found */
int main (int argc, char* argv[])
{
int numargc=argc-1;
char arraycmds[numargc][150];
int i=1, m=0, sleeptimes=5, numfork;
int rc=NOT_FOUND;
pid_t pid;
int fdes[2];
if(pipe(fdes) == -1)
{
perror("PIPE ERROR");
exit(4);
}
while(i <= numargc)
{
memset(arraycmds[m], 0, 150);
rc=cmdfinder(argv[i], arraycmds[m]);
if (rc)
{
printf("Command found:%s\n", arraycmds[m]);
}
i++;
m++;
}
i=0; //array index
numfork=1; //fork number
while(numfork <= numargc)
{
if ((pid=fork()) == -1)
{
perror("FORK ERROR");
exit(3);
}
else if (pid == 0)
{
/* Child */
sleep(sleeptimes);
if (numfork == FIRST_CHILD)
{
DUP_WRITE();
EXEC(i, numfork);
}
else if (numfork == LAST_CHILD)
{
DUP_READ();
CLOSE_FDES_WRITE();
EXEC(i, numfork);
}
else
{
DUP_READ();
DUP_WRITE();
CLOSE_FDES_READ();
CLOSE_FDES_WRITE();
EXEC(i, numfork);
}
}
else
{
/* Parent */
printf("pid:%d\n", pid);
i++;
numfork++;
sleeptimes++;
}
}
PRINT;
printf("i:%d\n", i);
printf("numfork:%d\n", numfork);
printf("DONE\n");
return 0;
}
int cmdfinder(char* cmd, char* path)
{
DIR* dir;
struct dirent *direntry;
char *pathdir;
int searchtimes=2;
while (searchtimes)
{
pathdir = (char*)malloc(250);
memset(pathdir, 0, 250);
if (searchtimes==2)
{
pathdir=PATH_1;
}
else
{
pathdir=PATH_2;
}
if ((dir = opendir(pathdir)) == NULL)
{
perror("Directory not found");
exit (1);
}
else
{
while (direntry = readdir(dir))
{
if (strncmp( direntry->d_name, cmd, strlen(cmd)) == 0)
{
strcat(path, pathdir);
strcat(path, cmd);
//searchtimes--;
return FOUND;
}
}
}
closedir(dir);
searchtimes--;
}
printf("%s: Not Found\n", cmd);
return NOT_FOUND;
}
All your macros are making this harder to read than if you just wrote it straight. Especially when they refer to local variables. To find out what's going on with EXEC my eyes have to jump up from where it's used to where it's defined, find out which local arrays it uses, then jump back down to see how that access fits in the flow of main. It's a maze of macros.
And wow, cmdfinder? Your very own $PATH lookup, only it's hardcoded to /usr/bin:/bin? And double wow, readdir, just to find out whether a file exists whose name is already decided? Just stat it! Or don't do anything: just exec it and handle the ENOENT by trying the next one. Or use execlp; that's what it's there for!
On to the main point... you don't have enough pipes, and you're not closing all the unused descriptors.
last | sort | more is a pipeline of 3 commands connected by 2 pipes. You can't do it with one pipe. The first command should write into the first pipe, the middle command should read the first pipe and write to the second pipe, and the last command should read the second pipe.
You could create both pipes first, then do all the forks, which makes things simple to follow, but requires a lot of closes in every child process since they'll all inherit all the pipe fds. Or you can use a more sophisticated loop, creating each pipe just before forking the first process that will use it, and closing each descriptor in the parent as soon as the relevant child process has been created. I'd hate to see how many macros you'd use for that.
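For what it's worth, here is a sketch of that second, loop-based approach for an arbitrary pipeline. The names run_pipeline and cmds are made up for illustration; they are not part of your program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* run cmds[0] | cmds[1] | ... | cmds[ncmds-1]; each cmds[i] is a
 * NULL-terminated argv array (hypothetical helper, not from the post) */
static void run_pipeline(char **cmds[], int ncmds)
{
    int in_fd = STDIN_FILENO;              /* read end feeding the next command */

    for (int i = 0; i < ncmds; i++)
    {
        int pfd[2] = { -1, -1 };
        if (i < ncmds - 1 && pipe(pfd) == -1)
        {
            perror("pipe");
            exit(1);
        }
        pid_t pid = fork();
        if (pid == -1)
        {
            perror("fork");
            exit(1);
        }
        if (pid == 0)
        {
            if (in_fd != STDIN_FILENO)     /* previous pipe feeds stdin */
            {
                dup2(in_fd, STDIN_FILENO);
                close(in_fd);
            }
            if (i < ncmds - 1)             /* next pipe takes stdout */
            {
                dup2(pfd[1], STDOUT_FILENO);
                close(pfd[0]);
                close(pfd[1]);
            }
            execvp(cmds[i][0], cmds[i]);
            perror("execvp");
            _exit(127);
        }
        /* parent: this child now owns in_fd and pfd[1], so drop them here */
        if (in_fd != STDIN_FILENO)
            close(in_fd);
        if (i < ncmds - 1)
        {
            close(pfd[1]);
            in_fd = pfd[0];                /* the next child will read this */
        }
    }
    /* a real program would wait() for the children here */
}
Called with three NULL-terminated argv arrays for last, sort and more, this builds the same pipeline your program aims for, and every descriptor is closed as soon as it is no longer needed, so each stage sees EOF at the right time.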
Every successful dup should be followed by a close of the descriptor that was copied. dup is short for "duplicate", not "move". After it's done, you have an extra descriptor left over, so don't just dup2(fdes[1], fileno(stdout)) - also close(fdes[1]) afterward. (To be perfectly robust you should check whether fdes[1] == fileno(stdout) already, and in that case skip the dup2 and close.)
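In code, using your fdes[] array and the READ/WRITE names from your enum, the writer's setup would look roughly like this (a sketch, not a drop-in patch):
if (fdes[WRITE] != fileno(stdout))            /* already stdout? then nothing to do */
{
    if (dup2(fdes[WRITE], fileno(stdout)) == -1)
    {
        perror("dup2");
        exit(4);
    }
    close(fdes[WRITE]);                       /* dup2 made a copy; drop the original */
}
close(fdes[READ]);                            /* the writer never reads this pipe */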
FOLLOWUP QUESTIONS
You can't use one pipe for 3 processes because there would be no way to distinguish which data should go to which destination. When the first process writes to the pipe, while both of the other processes are trying to read from it, one of them will get the data but you won't be able to predict which one. You need the middle process to read what the first process writes, and the last process to read what the middle process writes.
You're halfway right about file descriptors being shared after a fork. The actual pipe object is shared. That's what makes the whole system work. But the file descriptors - the endpoints designated by small integers like 1 for standard output, 0 for standard input, and so on - are not coupled the way you suggest. The same pipe object may be associated with the same file descriptor number in two processes, but the associations are independent. Closing fd 1 in one process does not cause fd 1 to become closed in any other process, even if they are related.
Sharing of the fd table, so that a close in one task has an effect in another task, is part of the "pthread" feature set, not the "fork" feature set.
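A tiny standalone sketch (my own example, not code from your program) demonstrates the point: a close() in the child has no effect on the parent's copy of the descriptor.
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int p[2];
    pipe(p);
    if (fork() == 0)
    {
        close(p[1]);     /* child closes its copy of the write end */
        _exit(0);
    }
    wait(NULL);          /* child has exited; all of its descriptors are gone */
    /* the parent's copy of p[1] is untouched and still writable */
    if (write(p[1], "still open\n", 11) == 11)
        fprintf(stderr, "parent's p[1] still works after the child's close()\n");
    return 0;
}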

Need help in IPC through Pipes

I am working on a lab.
A father process will create two son processes, A and B.
Son A will send a string to son B through a pipe; son B will invert the case of the string it got from son A and send the inverted string back to son A; after receiving the inverted string, son A will print it to the screen.
Here is the code.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <ctype.h>
void process_A(int input_pipe[], int output_pipe[])
{
int c;
char ch;
int rc;
close(input_pipe[1]);
close(output_pipe[0]);
while ((c = getchar()) > 0) {
ch = (char)c;
rc = write(output_pipe[1], &ch, 1);
if (rc == -1) {
perror("A_TO_B: write");
close(input_pipe[0]);
close(output_pipe[1]);
exit(1);
}
rc = read(input_pipe[0], &ch, 1);
c = (int)ch;
if (rc <= 0) {
perror("A_TO_B: read");
close(input_pipe[0]);
close(output_pipe[1]);
exit(1);
}
putchar(c);
}
close(input_pipe[0]);
close(output_pipe[1]);
exit(0);
}
void process_B(int input_pipe[], int output_pipe[])
{
int c;
char ch;
int rc;
close(input_pipe[1]);
close(output_pipe[0]);
while (read(input_pipe[0], &ch, 1) > 0) {
c = (int)ch;
if (isascii(c) && isupper(c))
c = tolower(c);
else if (isascii(c) && islower(c))
c = toupper(c);
ch = (char)c;
rc = write(output_pipe[1], &ch, 1);
if (rc == -1) {
perror("B_TO_A: write");
close(input_pipe[0]);
close(output_pipe[1]);
exit(1);
}
}
close(input_pipe[0]);
close(output_pipe[1]);
exit(0);
}
int main(int argc, char* argv[])
{
/* 2 arrays to contain file descriptors, for two pipes. */
int A_TO_B[2];
int B_TO_A[2];
int pid;
int rc,i,State;
/* first, create one pipe. */
rc = pipe(A_TO_B);
if (rc == -1) {
perror("main: pipe A_TO_B");
exit(1);
}
/* create another pipe. */
rc = pipe(B_TO_A);
if (rc == -1) {
perror("main: pipe B_TO_A");
exit(1);
}
for(i=0;i<2;i++)
{
if((pid=fork()) <0){perror("fork failed\n");};
if((i==0) && (pid ==0))
{
process_A(A_TO_B, B_TO_A);
}
else if((i==1)&&(pid==0))
{
process_B(B_TO_A, A_TO_B);
}
else if(pid>0)
{
wait( &State );
}
}
return 0;
}
The problem is that when I run the program, son B gets blocked.
I need your help.
Thanks in advance.
OK, diagram:
initially: the parent process has both B_TO_A[0] and B_TO_A[1] open,
           and both A_TO_B[0] and A_TO_B[1] open

fork (makes copy)
    parent:                                 child (pid == 0):
        B_TO_A both open, A_TO_B both open      calls process_A: closes unwanted pipe ends,
        calls wait(), waits for one child       then loops: reads stdin, writes one pipe, reads the other pipe

if we ever get here:
fork (makes copy)
    parent:                                 child (pid == 0):
        B_TO_A both open, A_TO_B both open      calls process_B: closes unwanted pipe ends,
        both ends of both pipes still open      then loops: reads one pipe, writes the other pipe
        calls wait(), waits for one child
First, you will usually not get to "if we ever get here" because the child running process_A() runs in a loop until either EOF on stdin (if that occurs first) or one of the pipe read/write calls fails (e.g., due to EOF on input_pipe[0]). Since the parent is still waiting in a wait() call, and has both ends of both pipes open, there's no EOF on the pipe (EOF on a pipe occurs after you read all the data written by all writers, and all dups of the write end have been closed). So the only way to get there is to hit EOF on stdin, so that the while loop does not run.
Second, if you do get around to forking again and doing process_B(), that child will also wait forever, because one write end of the pipe it's reading from is still open... in the parent! The parent won't close it, because the parent will be waiting forever in wait.
In general, what you need to do here is:
create two pipes (like you do now)
fork once, and run process_A() in the child
fork again (in the parent), and run process_B() in the (new) child
close both ends of both pipes (in the parent)
wait for both children now, after both have gotten started
The error handling gets a bit messy since you have to do something (such as kill() the first child) if you can't start the second child, so you need to know how far along you have gotten. You can still loop to fork twice, but you can't wait inside the loop, and with just two trips around the loop, each of which does rather different steps, you might as well just write it all out without a loop. A sketch of that straight-line version follows.
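Here is one way the rewritten main() could look, keeping your process_A()/process_B() functions and the same argument order you already use; it is a sketch under those assumptions, and it also needs <sys/wait.h> and <signal.h> on top of your existing includes:
int main(void)
{
    int A_TO_B[2];
    int B_TO_A[2];
    pid_t pid_a, pid_b;

    if (pipe(A_TO_B) == -1 || pipe(B_TO_A) == -1)
    {
        perror("main: pipe");
        exit(1);
    }

    if ((pid_a = fork()) < 0)
    {
        perror("fork A");
        exit(1);
    }
    if (pid_a == 0)
        process_A(A_TO_B, B_TO_A);     /* never returns */

    if ((pid_b = fork()) < 0)
    {
        perror("fork B");
        kill(pid_a, SIGTERM);          /* don't leave the first child hanging */
        exit(1);
    }
    if (pid_b == 0)
        process_B(B_TO_A, A_TO_B);     /* never returns */

    /* parent: close all four ends so the children can see EOF */
    close(A_TO_B[0]); close(A_TO_B[1]);
    close(B_TO_A[0]); close(B_TO_A[1]);

    /* only now wait, once for each child */
    waitpid(pid_a, NULL, 0);
    waitpid(pid_b, NULL, 0);
    return 0;
}
The only structural differences from your version are that the parent no longer waits inside the loop and that it closes all four pipe descriptors before waiting.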

How can I get the current mouse (pointer) position co-ordinates in X

This can be either some sample C code or a utility; it can show the result in a GUI or on the console, it doesn't matter. But I have to be able to "command" it to grab the co-ordinates at an exact time, which makes xev not very useful (as far as I could figure out).
I'm not a C programmer by any means but I looked at a couple of online tutorials and think this is how you are supposed to read the current mouse position. This is my own code and I'd done nothing with Xlib before so it could be completely broken (for example, the error handler shouldn't just do nothing for every error) but it works. So here is another solution:
#include <X11/Xlib.h>
#include <assert.h>
#include <unistd.h>
#include <stdio.h>
#include <malloc.h>
static int _XlibErrorHandler(Display *display, XErrorEvent *event) {
fprintf(stderr, "An error occured detecting the mouse position\n");
return True;
}
int main(void) {
int number_of_screens;
int i;
Bool result;
Window *root_windows;
Window window_returned;
int root_x, root_y;
int win_x, win_y;
unsigned int mask_return;
Display *display = XOpenDisplay(NULL);
assert(display);
XSetErrorHandler(_XlibErrorHandler);
number_of_screens = XScreenCount(display);
fprintf(stderr, "There are %d screens available in this X session\n", number_of_screens);
root_windows = malloc(sizeof(Window) * number_of_screens);
for (i = 0; i < number_of_screens; i++) {
root_windows[i] = XRootWindow(display, i);
}
for (i = 0; i < number_of_screens; i++) {
result = XQueryPointer(display, root_windows[i], &window_returned,
&window_returned, &root_x, &root_y, &win_x, &win_y,
&mask_return);
if (result == True) {
break;
}
}
if (result != True) {
fprintf(stderr, "No mouse found.\n");
return -1;
}
printf("Mouse is at (%d,%d)\n", root_x, root_y);
free(root_windows);
XCloseDisplay(display);
return 0;
}
xdotool might be the best tool for this.
For C, you can use libxdo.
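For example, something along these lines (a sketch assuming libxdo's xdo_new()/xdo_get_mouse_location() interface; build with -lxdo):
#include <stdio.h>
#include <xdo.h>

int main(void)
{
    xdo_t *xdo = xdo_new(NULL);   /* NULL: use the DISPLAY environment variable */
    if (xdo == NULL)
    {
        fprintf(stderr, "could not connect to the X display\n");
        return 1;
    }
    int x, y, screen;
    xdo_get_mouse_location(xdo, &x, &y, &screen);
    printf("Mouse is at (%d,%d) on screen %d\n", x, y, screen);
    xdo_free(xdo);
    return 0;
}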
Actually, xev is very useful if you supply it with the window id grabbed using xwininfo; then it can easily perform this task for you. There are no doubt much more elegant solutions, but it works.
xinput can be used to print the full device state of any input device.
First you need to discover your device id:
$ xinput --list | grep -i mouse
⎜ ↳ Logitech USB Receiver Mouse id=11 [slave pointer (2)]
then you can ask for state:
$ xinput --query-state 11;
2 classes :
ButtonClass
button[1]=up
button[2]=up
button[3]=up
button[4]=up
button[5]=up
button[6]=up
button[7]=up
button[8]=up
button[9]=up
button[10]=up
button[11]=up
button[12]=up
button[13]=up
button[14]=up
button[15]=up
button[16]=up
button[17]=up
button[18]=up
button[19]=up
button[20]=up
ValuatorClass Mode=Relative Proximity=In
valuator[0]=274
valuator[1]=886
valuator[2]=0
valuator[3]=675
Or just a loop:
while sleep .2; do xinput --query-state $(xinput --list | grep -i mouse | cut -d= -f2 | cut -f1| head -1); done

A doubt on pipes in Unix

The code below is for executing ls -l | wc -l.
In the code, if I comment out close(p[1]) in the parent, the program just hangs, waiting for some input. Why is that? The child writes the output of ls to p[1], and the parent should have taken that output from p[0].
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
main ()
{
    int i;
    int p[2];
    pid_t ret;

    pipe (p);
    ret = fork ();
    if (ret == 0)
    {
        close (1);
        dup (p[1]);
        close (p[0]);
        execlp ("ls", "ls", "-l", (char *) 0);
    }
    if (ret > 0)
    {
        close (0);
        dup (p[0]);
        // Doubt: commenting out the line below does not work. Why?
        close (p[1]);
        wait (NULL);
        execlp ("wc", "wc", "-l", (char *) 0);
    }
}
pipe + fork creates 4 file descriptors, two are inputs
Before the fork you have a single pipe with one input and one output.
After the fork you will have a single pipe with two inputs and two outputs.
With two copies of the pipe's input (the end a process writes to) and two copies of its output (the end a process reads from), the reader must close its unused copy of the input end, or that input end never gets fully closed.
In your case the parent is the reader, and in addition to the output end of the pipe it also holds an open copy of the other end, the input end, that data could in theory still be written to. As a result, the pipe never delivers EOF, because when the child exits the write side is still open thanks to the parent's unused descriptor.
So the parent deadlocks, in effect waiting forever for itself to write more data.
Note that 'dup(p[1])' means you have two file descriptors pointing to the same file. It does not close p[1]; you should do that explicitly. Likewise with 'dup(p[0])'. Note that a file descriptor reading from a pipe only returns zero bytes (EOF) when there are no open write file descriptors for the pipe; until the last write descriptor is closed, the reading process will hang indefinitely. If you dup() the write end, there are two open file descriptors to the write end, and both must be closed before the reading process gets EOF.
You also do not need or want the wait() call in your code. If the ls listing is bigger than a pipe can hold, your processes will deadlock, with the child waiting for ls to complete and ls waiting for the child to get on with reading the data it has written.
When the redundant material is stripped out, the working code becomes:
#include <unistd.h>
int main(void)
{
    int p[2];
    pid_t ret;

    pipe(p);
    ret = fork();
    if (ret == 0)
    {
        close(1);
        dup(p[1]);
        close(p[0]);
        close(p[1]);
        execlp("ls", "ls", "-l", (char *) 0);
    }
    else if (ret > 0)
    {
        close(0);
        dup(p[0]);
        close(p[0]);
        close(p[1]);
        execlp("wc", "wc", "-l", (char *) 0);
    }
    return(-1);
}
On Solaris 10, this compiles without warning with:
Black JL: gcc -Wall -Werror -Wmissing-prototypes -Wstrict-prototypes -o x x.c
Black JL: ./x
77
Black JL:
If the parent doesn't close p[1], then that FD is open in two processes -- parent and child. The child eventually exits (closing its copies), but the parent never closes its own -- so the write end stays open. Therefore the reader of the pipe (the parent, which execs wc) is going to wait forever just in case more writing is gonna be done on it... it ain't, but the reader just doesn't KNOW!-)
