execl behavior in Unix

In my program I am executing a long-lived tail (with -f) via execl.
Nothing after that call to execl gets executed.
Do I need to run this tail in the background so that I can do other things in my program?
I usually exit my program by pressing Ctrl+C.

execl() replaces the calling process image, so your calling program won't exist anymore once you've called it.
To get around this, call execl() after a call to fork(). fork() splits your program in two (a parent and a child), and its return value tells you which process you are in: you'd put your execl() call in the child branch, while the parent carries on.
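A minimal sketch of that pattern, assuming tail lives at /usr/bin/tail (the path varies by system) and watching a placeholder file /tmp/example.log:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace this process image with tail -f. */
        execl("/usr/bin/tail", "tail", "-f", "/tmp/example.log", (char *)NULL);
        perror("execl");        /* Only reached if execl failed. */
        _exit(1);
    }
    /* Parent: still alive, free to do other work here. */
    printf("tail runs as pid %d\n", (int)pid);
    wait(NULL);                 /* Reap the child when it exits. */
    return 0;
}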

The man page for the exec family of calls starts with:
The exec family of functions replaces the current process image with a new process image.
Not entirely sure what you want to accomplish, but it looks like exec isn't the solution. If you want your first program to remain alive, you'll need to fork. Does your initial program do something with the output of tail -f?

If your parent program would like to capture the output from tail, you should look at the popen() function. It will start the tail process, and the output can be read from the FILE* it returns.
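A minimal sketch of that approach; the file being tailed is a placeholder:

#include <stdio.h>

int main(void)
{
    /* popen() forks a shell running the command and hands us its stdout. */
    FILE *fp = popen("tail -f /tmp/example.log", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }
    char line[1024];
    /* With -f, this loop only ends when tail itself terminates. */
    while (fgets(line, sizeof line, fp) != NULL)
        printf("parent read: %s", line);
    pclose(fp);
    return 0;
}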
If your parent program has no interest in capturing the output, then you'll want to create a child process using fork() which then calls execl(). The child's process image will be replaced by tail, and your parent process will continue.

Related

Awesome WM - os.execute vs awful.spawn

I was wondering if there are any difference between lua's
os.execute('<command>')
and awesome's
awful.spawn('<command>')
I noticed that lain suggests using os.execute; is there any reason for that, or is it a matter of personal taste?
Do not ever use os.execute or io.popen. They are blocking functions that cause very bad performance and a noticeable increase in input (mouse/keyboard) and client repaint latency. Using these functions is an anti-pattern and will only make everything worse.
See the warning in the documentation https://awesomewm.org/apidoc/libraries/awful.spawn.html
Now, to really answer the question, you have to understand that Awesome is single threaded, which means it only does one thing at a time. If you call os.execute("sleep 10"), your mouse will keep moving, but your computer will otherwise be frozen for 10 seconds. For commands that execute fast enough, you might not notice, but keep in mind that, given the latency/frequency rules, any command that takes 33 milliseconds to execute will drop up to 2 frames in a 60 fps video game (and will drop at least one). If you run many commands per second, they add up and wreck your system performance.
But Awesome isn't doomed to be slow. It may not have multiple threads, but it has coroutines and callbacks. This means things that take time to execute can still do so in the background (using C threads or an external process + sockets) and notify the Awesome thread when they are done. This works perfectly and avoids blocking Awesome.
Now this brings us to the next part. If I do:
-- Do **NOT** do this.
os.execute("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
The label will display foo. But if I do:
-- Assumes /tmp/foo.txt does not exist
awful.spawn.with_shell("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
Then the label will be empty. awful.spawn and awful.spawn.with_shell do not block, so the io.popen will execute way before sleep 1 finishes. This is why we have async functions to execute shell commands. There are many variants with different characteristics and levels of complexity. awful.spawn.easy_async is the most common, as it is good enough for the general case of "I want to execute a command and do something with the output when it finishes".
awful.spawn.easy_async_with_shell("sleep 1; echo foo > /tmp/foo.txt", function()
    awful.spawn.easy_async_with_shell("cat /tmp/foo.txt", function(out)
        mylabel.text = out
    end)
end)
In this variant, Awesome does not block. But, as with the other spawn functions, you cannot put code that uses the result of the command outside of the callback function: that code would execute before the command finishes, so the result would not yet be available.
There are also coroutines, which remove the need for callbacks, but they are currently hard to use within Awesome and would be confusing to explain here.

If you fork() and exec() from the child process, and wait in the parent, how does the parent get a return code from the child?

I'm learning about fork(), exec(), etc. and I ran into something in a textbook that I don't fully understand.
In the example, a process calls fork().
In the child process, we call exec().
Later, in the parent, we call wait().
It is my understanding that a successful exec() call never returns. If we called exec() in the child, how can we wait in the parent for the child to return, if control is never returned to the child from exec()?
My only guess is that the parent, thinking it's waiting on the child, is actually waiting on the new process created by exec? I.e., normally I'd fork() and wait for the child. If I fork() and exec the UNIX program date, then wait for the child in the parent, am I actually now waiting for date to exit?
Thanks!
You need to distinguish the process from the program. Calling exec runs a different program in the same process. The exec function doesn't return (except to signal an error) because it terminates the program that calls it. However, the process is reused to run a different program. In a way, from the perspective of the process calling exec, the exec function "returns" as the entry point of the new program.
From the point of view of the parent, there's a child process. That's all the parent knows. The parent doesn't know that the child called exec, unless it watches it and finds out by indirect means such as running ps. The parent is just waiting for the child process to exit, no matter what program the child process happens to be running.
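A minimal sketch of that pattern, using date as in the question; waitpid() targets the child process, whatever program it happens to be running:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the process now runs the "date" program. */
        execlp("date", "date", (char *)NULL);
        _exit(127);                  /* Only reached if exec failed. */
    }
    int status;
    waitpid(pid, &status, 0);        /* Waits for the process, not the program. */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}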

Is a Unix pipe limited to use only between 2 processes?

I am reading about pipes in UNIX for inter-process communication between 2 processes. I have the following questions.
1) Is a Unix pipe limited to use between 2 processes, or can 3 or more related processes communicate using a single pipe? For example, if I have one parent and 2 child processes, can I write from the parent process and read from the same pipe in both children? If so, how would both children get the same contents, given that when one child reads from the pipe, the kernel removes that data from the pipe?
2) Is it really necessary to close the unused end of the pipe? For example, if my parent process is writing data into the pipe and the child is reading from it, is it really necessary to close the read end of the pipe in the parent process and the write end in the child process? Are there any side effects if I don't close those ends? Why do we need to close them?
A single pipe is not a natural solution for letting a parent broadcast to its children; shared memory would solve that problem more naturally. There is only one message that is naturally broadcast from the parent to the children: the parent can close the write end of the pipe, which causes all the children to see read return 0 on the read end.
However, a single pipe can be used by the children to relay information back to the parent. As long as the messages are properly framed with source information from each child, the parent can field responses from all its children on the read end of the pipe, while each child writes to the write end.
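A minimal sketch of the children-to-parent direction; it also shows why closing the unused ends matters (if the parent kept its write end open, its read() would never return 0):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    for (int i = 0; i < 2; i++) {
        if (fork() == 0) {
            close(fd[0]);                 /* Child only writes. */
            char msg[64];
            /* Frame the message with its source so the parent can tell
               the children apart. Short writes to a pipe are atomic. */
            snprintf(msg, sizeof msg, "child %d: hello\n", i);
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            _exit(0);
        }
    }

    close(fd[1]);                         /* Parent only reads: without this,
                                             read() would never return 0. */
    char buf[256];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        printf("parent got: %s", buf);
    }
    close(fd[0]);
    while (wait(NULL) > 0) ;              /* Reap both children. */
    return 0;
}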

Implementation of a Unix terminal background process

I'm writing my own shell in C, and my problem is the implementation of background processes.
In Bash, whenever we execute a command ending with '&', that process goes into the background and starts executing. Output of the background process still appears on the terminal, and when the background process needs input it is suspended until we bring it to the foreground with the fg command.
So how do I implement background processes?
For normal execution of commands (not ending with &), I call the fork system call, and then in the child process I execute the command; the parent waits for the child to finish (via wait()).
For commands ending with '&' I do the same thing, except the parent does not wait for the child. My problem is that whenever the background process needs input, it takes control of the terminal. So how do I suspend the child process when it needs input?
To detach the process from the parent's terminal, call setsid() in the child process; it will run the program in a new session:
pid_t sid = setsid();
See also http://www.netzmafia.de/skripten/unix/linux-daemon-howto.html
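A minimal sketch of that suggestion; the command being run in the background is a placeholder:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        /* Child: start a new session, detaching it from the controlling
           terminal. This works because a freshly forked child is never
           a process group leader. */
        if (setsid() < 0) { perror("setsid"); _exit(1); }
        execlp("sleep", "sleep", "10", (char *)NULL);
        _exit(127);                 /* Only reached if exec failed. */
    }
    /* Parent (the shell): does not wait for a background job. */
    return 0;
}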

AutoIt: Run next command after finishing the previous one

I'm currently writing a macro that performs a series of control sends and control clicks.
They must be done in the exact order.
At first I didn't have any sleep statements, so the script would just go through the commands regardless of whether the previous one had finished (e.g. clicking SUBMIT before the input string had finished sending).
So I thought maybe I'd just put in some sleep statements, but then I'd have to figure out how best to tune them, and I'd have to account for other computers' speeds, because a slow computer needs longer delays between commands. That would be impossible to optimize for everyone.
Is there a way to force each line to run only after the previous one has finished?
EDIT: To be more specific, I want the ControlSend command to finish executing before I click the buttons.
Instead of ControlSend, use ControlSetText. This is immediate (like GuiEdit).
My solution: use functions from the user-defined library "GuiEdit" to directly set the value of the textbox. It appears to be immediate, thus allowing me to avoid having to wait for the keystrokes to be sent.
