Run a job after another [closed] - unix

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
In Unix/Linux, I have a job ./a.out currently running in the background over ssh, started as
nohup ./a.out &
and I have since logged out of the original shell. I want another job, ./b.out, to start running only after ./a.out finishes. How can I do this? The overall effect should be equivalent to
./a.out && ./b.out
but I do not want to kill ./a.out.
Edit: clarify that ./a.out is running in the background using nohup. (Thanks to Marc B.)

One approach would be to find the process ID of a.out with top or ps, and run a launcher program that checks once a second whether that PID is still alive. When it isn't, the launcher runs b.out.
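That launcher can be sketched as a small POSIX-shell helper. Everything here is illustrative (the name run_after is not from the answer); kill -0 sends no signal and merely tests whether the PID still exists.

```shell
# Illustrative sketch of the launcher idea above.
# kill -0 sends no signal; it only tests whether the PID still exists.
run_after() {
  pid="$1"; shift
  while kill -0 "$pid" 2>/dev/null; do
    sleep 1              # poll once a second, as the answer suggests
  done
  "$@"                   # PID is gone: run the follow-up job
}
```

To survive logout, put the function in a script and start it under nohup, e.g. nohup sh launcher.sh 12345 ./b.out &. Two caveats: PID numbers can be reused after a process exits, and kill -0 can fail with a permission error (rather than "no such process") for another user's process.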

Related

How do I make the rsync daemon log to stdout (instead of a logfile)? [closed]

Closed 3 years ago.
I'm running rsync underneath Supervisor. I normally start rsync daemon like this:
rsync --daemon --config=/home/zs6ftad/deployments/cmot_rsync_daemon/rsyncd.conf --no-detach
I'd like any log messages to be echoed to standard output instead of being stored in the log file. Is there an option that makes an rsync server behave this way?
You can get rsyncd to log to stdout by setting the --log-file option to /dev/stdout:
rsync --daemon --no-detach --log-file=/dev/stdout

Linux - How to kill kibana process [closed]

Closed 3 years ago.
How do I kill/stop the Kibana process?
Run netstat -pln | grep 5601 to find the process listening on Kibana's default port (5601); that gives you the process ID, which you can then kill, e.g. kill -9 13304.
If you have installed as service following command will work
service kibana stop
kill -9 `ps aux | grep kibana | grep -v grep | awk '{print $2}'`
is very helpful when you find a lot of Kibana processes. But be careful: it can also kill other processes whose command lines contain "kibana".
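If pkill (from procps) is available, it replaces the whole ps | grep | awk pipeline. The wrapper name below is an illustrative choice, not from the answer; the -f flag matches against the full command line, so the same over-matching caveat applies.

```shell
# Illustrative wrapper: signal every process whose command line matches a pattern.
# pkill -f sends SIGTERM by default; escalate to -9 only as a last resort.
kill_by_pattern() {
  pkill -f "$1"
}

# Preview the matches first with: pgrep -af kibana
# Then: kill_by_pattern kibana
```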

Apply new configuration to devstack from local.conf [closed]

Closed 6 years ago.
If I set a couple of settings in my local.conf file in the /devstack folder for example:
ADMIN_PASSWORD=supersecret
DATABASE_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
and then run
./stack.sh
but then later want to append this file with some network configurations for example:
FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
will those new settings be applied to the cloud when I run ./unstack.sh and then ./stack.sh?
The unstack script stops your whole cloud environment; the stack script then reconfigures and rebuilds the cloud from scratch. If you run into problems, there is also a clean.sh script that destroys everything that was created. In your case it is enough to run:
./unstack.sh && ./stack.sh

Debian Wheezy Networking Spontaneously Shuts Down [closed]

Closed 8 years ago.
I have just recently upgraded to Wheezy. Since the update, my server spontaneously kills networking. Which logs could I look in to find the issue? I have looked in /var/log, and no relevant-looking logs have been updated in the past few days. This server runs headless, so re-enabling networking means turning the server off and on again, as I can't ssh to it.
Any suggestions would be welcome.
Thanks
/var/log/syslog should have something. You can also run dmesg, which may pick it up if it's a kernel-module problem. To find the module name, use lspci -v | grep -i ethernet and look for the driver name a few lines later (it could be e1000 or similar); use that module name when grepping the dmesg output.
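The log hunt above can be wrapped in a small helper. The function name and the keyword list below are illustrative guesses, not from the answer; adjust them to your setup.

```shell
# Illustrative helper: show recent network-related lines from a log file.
# The keywords (eth, network, link) are a guess at what matters; adjust freely.
scan_net_log() {
  grep -i -E 'eth|network|link' "$1" | tail -n 20
}

# e.g. scan_net_log /var/log/syslog
# For kernel-module issues, the same grep works on dmesg output:
#   dmesg | grep -i e1000     # substitute your driver module name
```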

How to password protect a crontab in unix [closed]

Closed 8 years ago.
This question does not appear to be about programming within the scope defined in the help center.
We need to password protect activities on the crontab. For example, even if we run
crontab -l
or
crontab -e
or
crontab -r
we would have to enter a password to go to the next level (viewing/editing/deleting), even as the root user.
Kindly suggest some mechanisms.
Thanks.
If you don't trust the root user on your system, you have bigger problems. There is no way to securely protect anything from root: by definition, that user can do whatever they like, including removing any protection you put in place to enforce a password when executing crontab.
