Using execlp after jailing a process - unix

Basically I want to execute a shell command inside a jailed process. When I try the code below (both as a normal user and as root), it produces no output:
if (!(pid = fork())) {
    chroot("./jail_folder");
    chdir("/");
    execl("/bin/ls", "ls", NULL);
}
I tried the perror() function and it gave me a "No such file or directory" error.
Is it possible to run a shell command in a jailed process? If so, how do we do that?

Yes, it is possible, but you have to make it accessible to the jail (typically, it means copying the desired program + all its libraries to the jail; symlinking wouldn't work, hardlinking is OK). Otherwise, it's no surprise that if you confine a program to part of the directory tree without /bin, you can't access /bin/ls.
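For example, a rough sketch of populating the jail (assuming a Linux system where ldd is available; the library paths will differ on your machine, and /bin/ls may itself live elsewhere):
mkdir -p jail_folder/bin
cp /bin/ls jail_folder/bin/
# copy every shared library ls depends on, preserving its path inside the jail
for lib in $(ldd /bin/ls | grep -o '/[^ ]*'); do
    mkdir -p "jail_folder$(dirname "$lib")"
    cp "$lib" "jail_folder$(dirname "$lib")/"
done
After that, the execl("/bin/ls", ...) call in the chrooted child should have something to find.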

Related

If I leave a tail -f running on a file, does it prevent the file from being deleted?

The operating system is AIX. I have done multiple tests by running tail -f on text files and then, from another terminal session, trying to delete the tailed file. I have always succeeded in deleting the file and no problem occurred, but I have not found any factual documentation saying that tail -f does not lock a file or prevent it from being deleted. So I would like to know whether there is such formal documentation and, if the tail command can lock a file or prevent it from being deleted, how I can reproduce that case.
I suspect that the unlink() system call in AIX behaves similarly enough to Linux that the first paragraph in this Linux man page adequately describes it:
unlink() deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse.
When removing large log files that are being tailed (or written to), the disk space isn't freed until all those processes close the file or terminate.
You can delete or move the file while tail -f is running, but it will not be recreated if deleted; you have to create it again manually. Hope this helps.
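To see this behaviour for yourself, a rough sketch (the /proc listing at the end is Linux-specific; on AIX you would use something like fuser or lsof to spot the still-open file):
echo hello > /tmp/demo.log
tail -f /tmp/demo.log &       # tail keeps the file open
TAILPID=$!
rm /tmp/demo.log              # the unlink succeeds straight away
ls /tmp/demo.log              # "No such file or directory"
ls -l /proc/$TAILPID/fd | grep deleted   # the open-but-deleted file is still there
kill $TAILPID                 # only now is the disk space released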

exec bash command in .profile file not letting Control-M job run

I have an issue where my Control-M job is not able to execute anything on the Unix box.
After investigating, I found that the .profile file on the Unix server is the culprit.
The content of the .profile file is:
exec bash
I tried renaming the file and running the job in UAT, and it did work, but I'm not sure what the implication of not having this file is.
Can someone please help me by explaining:
what the overall impact would be if I renamed the .profile file
how the content of the .profile file is used on the server
I don't know what this "Control-M" thing is, but you should be able to safely remove that one-line .profile from your account with no problem. All it does is replace whatever command shell is assigned by default to your account with the command shell called bash. If you don't care if you use the default shell or bash, and especially if you are having problems with that .profile file, then just remove it.
If you really want to use bash then you might try changing your default shell with the chsh command. That may also cause problems for this "Control-M" thing, so you'll want to read the chsh manual page to be sure you know how to determine what your current shell is and how to change back to the original value if there are any problems.
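For example (a sketch only; chsh options differ between systems, so check your local manual page before running anything):
echo "$SHELL"          # note what your current login shell is
chsh -s /bin/bash      # on many systems, -s sets the new login shell
# to change back, run chsh -s again with the shell the first command printed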

What is the Unix way for a console script to use config files?

Let's imagine we have some script 'm12' (I've just invented this name) that runs
on Linux computers. If it is situated in your $PATH, you can easily run it
from the console like this:
m12
It will work with the default parameters. But you can customize the behaviour of
this script by running something like:
m12 --enable_feature --select=3
It is great and it will work. But I want to create a config file ~/.m12rc so I
will not need to specify --enable_feature --select=3 every time I run it.
It can be easily done.
The difficult part is starting here.
So, I have the ~/.m12rc config file, but I want to start m12 without the parameters that
are stored in that config file. What is the Unix way to do this? Should I run the
script like this:
m12 --ignore_config
or is there a better solution?
Next, let's imagine I have the config file ~/.m12rc and I want some parameters from that
file, but want to change them a bit. How should I run the script, and how should the
script work?
And the last question: is it a good idea for the script to look for .m12rc first
in the current directory, then in ~/, and then in /etc?
I'm asking all these questions because I want to implement config files in my
small script and I want to make the correct design decisions.
The book 'The Art of Unix Programming' by E S Raymond discusses such issues.
You can override the config file with --config-file=/dev/null.
You would normally use the order:
System-wide configuration (/etc/m12/m12rc, or just /etc/m12).
User's personal configuration (~/.m12rc)
Local directory configuration (./.m12rc)
Command-line options
with each later-listed item overriding earlier listed items. You should be able to specify the configuration file to read on the command line; arguably, that should be given precedence over other options. Think about --no-system-config or --no-user-config or --no-local-config. Many scripts do not warrant a system config file. Most scripts I've developed would not use both local config and user config. But that's the way my mind works.
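A minimal sketch of that precedence in shell, using the hypothetical m12 file names from the question (each later file simply overrides variables set by the earlier ones):
#!/bin/sh
# source each existing config file, lowest precedence first
for conf in /etc/m12rc "$HOME/.m12rc" ./.m12rc; do
    [ -r "$conf" ] && . "$conf"
done
# command-line options are parsed after this point, so they override the files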
The way I package standard options is to have a script in $HOME/bin (say m12a) that does it for me:
#!/bin/sh
exec m12 --enable_feature --select=3 "$@"
If I want those options, I run m12a. If I want some other options, I run raw m12 with the requisite options. I have multiple hundreds of files in my personal bin directory (about 500 on my main machine, a Mac; some of those are executables, but many are scripts).
Let me share my experience. I normally source a config file at the beginning of the script. In the config file I also handle all the parameter switches:
DEFAULT_USER=blabla
while getopts ":u:" opt; do
  case $opt in
    u)
      export APP_USER=$OPTARG
      ;;
  esac
done
export APP_USER=${APP_USER-$DEFAULT_USER}
Then within the script I just use the variables; this lets me have a number of scripts that share the same input parameters.
In your case I imagine you would move the getopts section into the script and source the config file after it (unless a switch was given to skip the sourcing).
You should not put your script's config file in /etc; that would require root privileges, and you can simply live with a config file in your home directory.
If you do want to share your script with other users anyway, it should go under /usr/share...
Another option is to use thor (a Ruby gem); it is much simpler for handling input parameters and avoids the work needed to get the same result in bash (e.g. getopts supports only single-letter switches).

Is there a way to wrap arbitrary commands located under a subdirectory in a shell script

I have a bunch of customizations and would like to run my test program in a pristine environment.
Sure, I could use a tiny shell script to wrap the program and pass on arguments, but it would be cool and useful if I could invoke a pre (and possibly post) script only for commands located under certain subdirectories. The shell I'm using is zsh.
I don't know what you include in your “pristine environment”.
If you want to isolate yourself from the whole system, then maybe chroot is what you're after. You can set up a complete new system, with its own /etc, /bin and so on, but sharing the kernel, networking and other non-filesystem stuff with your running system. Root's cooperation is required (the chroot system call is reserved to root).
If you want to isolate yourself from your dot files, run the program with a different value for the HOME environment variable:
HOME=~/test-environment /path/to/test-program
HOME=~/test-environment zsh
If this is specifically about zsh's configuration files, you can set the ZDOTDIR environment variable before starting it to tell zsh to run its own dot files from a directory other than $HOME (or zsh --no-rcs to not load any dot file).
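For instance (the directory name here is just an example):
ZDOTDIR=~/test-zsh-config zsh    # zsh reads its dot files from ~/test-zsh-config
zsh --no-rcs                     # or start zsh without reading any dot files at all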
If by pristine environment you mean a fully controlled set of environment variables, then the env program does this.
env -i PATH=$PATH HOME=$HOME program args
will run program args with only the environment variables you specified.

unix command line execute with . (dot) vs. without

At a Unix command line, what's the difference between executing a program by simply typing its name, vs. executing a program by typing a . (dot) followed by the program name? e.g.:
runme
vs.
. runme
. name sources the file called name into the current shell. So if a file contains this
A=hello
then if you source that file, afterwards you can refer to a variable called A, which will contain hello. But if you execute the file (given proper execute permissions and a #!/interpreter line), such things won't work, since the variables and other things the script sets only affect the subshell it is run in.
Sourcing a binary file would not make any sense: the shell wouldn't know how to interpret the binary stuff (remember, it inserts the things appearing in that file into the current shell, much like the good old #include <file> mechanism in C). Example:
head -c 10 /dev/urandom > foo.sh; . foo.sh # don't do this at home!
bash: �ǻD$�/�: file or directory not found
Executing a binary file, however, does make a lot of sense, of course. So normally you want to just name the file you want to execute, and in special cases, like the A=hello case above, you want to source a file.
Using "source" or "." causes the commands to run in the current process. Running the script as an executable gives it its own process.
This matters most if you are trying to set environment variables in the current shell (which you can't do from a separate process) or want to abort the script without aborting your shell (which you can only do in a separate process).
The first executes the command. The second is shorthand for including a shell script inside another.
This syntax is used to "load" and parse a script. It's most useful when you have a script that has common functionality to a bunch of other scripts, and you can just "dot include" it. See http://tldp.org/LDP/abs/html/internal.html for details (scroll down to the "dot" command).
Running "runme" will create a new process which will go on its merry little way and not affect your shell.
Running ". runme" will allow the script "runme" to change your environment variables, change directories, and all sorts of other things that you might want it to do for you. It can only do this because it's being interpreted by the shell process that's already running for you. As a consequence, if you're running bash as your login shell, you can only use the "." notation with a bash script, not (for example) a binary on C shell script.
