How to start WSO2 in the background on Unix - wso2-api-manager

How to start WSO2 API Manager in the background? I start it with
$ sh wso2server.sh
How can I make it run in the background so that it won't be stopped when I exit my PuTTY terminal?

You can also use the nohup command to run the script as a background process.
nohup ./wso2server.sh &
This writes all the output you would normally see on the terminal to a file named nohup.out in the directory from which you ran the wso2server.sh script.
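For reference, the same pattern can be sketched with a stand-in command, redirecting output to a named log file instead of nohup.out and saving the PID so the server can be stopped later (sleep 30, server.log, and server.pid are placeholder names, not anything WSO2-specific):

```shell
# Generic nohup pattern; 'sleep 30' stands in for ./wso2server.sh.
nohup sleep 30 > server.log 2>&1 &   # custom log file instead of nohup.out
echo $! > server.pid                 # remember the PID for a later stop

# Later, to stop the server:
kill "$(cat server.pid)"
```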

You can run the command ./wso2server.sh --start or ./wso2server.sh --stop

You can use ./wso2server.sh start > /dev/null &

The following will help you; it is similar to any other Linux script:
sh wso2server.sh start > /dev/null &
Additionally, you can start any WSO2 product as a Linux service. Please refer to the documentation.

Related

How to run jupyter notebook in the background? No need to keep one terminal for it

Often we run jupyter notebook to pop up a page in the browser to use the notebook. However, the terminal that started the server stays occupied. Is there a way to close that terminal while keeping the server running in the background?
You can put the process into the background by using jupyter notebook --no-browser & disown. You can close the terminal afterwards and the process will still be running.
If you're using zsh you can also use a shorter version that does the same: jupyter notebook --no-browser &!.
To kill the process you can use pgrep jupyter to find the PID of the process and then kill 1234, replacing 1234 with the PID you just found.
Explanation
The --no-browser flag makes jupyter not open the browser automatically; the approach also works without this flag.
The & puts it into the background of the currently running shell.
The disown then removes the job from the background of the currently running shell and makes it run independently of the shell so that you may close it.
In the zsh version the &! is a built-in function that does the same as & disown.
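The & plus disown sequence can be tried out with any long-running command. A minimal sketch, where 'sleep 30' stands in for jupyter notebook --no-browser and demo.pid is an arbitrary file; the demo runs under bash explicitly because disown is a bash/zsh builtin, not POSIX sh:

```shell
# 'sleep 30' stands in for jupyter notebook --no-browser.
bash -c '
  sleep 30 &
  pid=$!
  disown "$pid"           # the shell forgets the job and will not HUP it on exit
  echo "$pid" > demo.pid
'
# The inner bash has exited, but the sleep it started is still alive:
kill -0 "$(cat demo.pid)" && kill "$(cat demo.pid)"
```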
Tmux is a good option available to run your Jupyter Notebook in the background.
I am running my Jupyter Notebook on a Google Cloud Platform VM instance with OS: Ubuntu 16.04, where:
I have to start an SSH terminal.
Then run the command jupyter-notebook --no-browser --port=5000. It starts the Jupyter Notebook on port 5000 of that VM instance.
Then I open my browser and type ip_address_of_my_instance:port_number, which is 5000. It opens my notebook in the browser.
Up to this point, all is good. But if the connection with my SSH terminal is terminated, the Jupyter Notebook stops immediately, and I have to re-run it once the SSH terminal is restarted or from a new SSH terminal.
To avoid this, tmux is a very good option.
Terminal multiplexer (tmux) to keep SSH sessions running in the background after the SSH terminal is closed:
Start the SSH terminal.
Type tmux. It will open a window in the same terminal.
Run the command to start the Jupyter Notebook there, then open the notebook.
Now, even if the SSH terminal is closed/terminated, your notebook keeps running on the instance.
If the connection is terminated:
Reconnect or open a new SSH terminal. To see the Jupyter server (which is kept running in the background), type the tmux attach command.
To terminate the tmux session:
Close the notebook, then type exit in the tmux terminal window.
(Note: if you close the notebook and use the tmux detach command instead, it exits the tmux window/terminal without terminating or stopping the tmux session.)
For more details, please refer to this article: https://www.tecmint.com/keep-remote-ssh-sessions-running-after-disconnection/
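The steps above can also be scripted non-interactively. A sketch, assuming tmux is installed (it exits quietly if not), with 'sleep 30' standing in for the jupyter-notebook command and jnote as an arbitrary session name:

```shell
# Scripted tmux walkthrough; 'sleep 30' stands in for
# jupyter-notebook --no-browser --port=5000.
command -v tmux >/dev/null || { echo "tmux not installed"; exit 0; }
tmux new-session -d -s jnote 'sleep 30'  # detached session survives SSH disconnects
tmux has-session -t jnote                # succeeds while the session is alive
# tmux attach -t jnote                   # reattach from a later SSH login
tmux kill-session -t jnote               # tear down when done
```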
Under *nix, the best way to run a program so that it is not terminated by closing the terminal is to use nohup (no hangup).
To start the browser after running the server, use the command:
nohup jupyter notebook &
And to start the server without opening the browser, use the command:
nohup jupyter notebook --no-browser &
Note that you can shut down the jupyter server by using Quit in the upper right corner of the jupyter page.
nohup makes the process ignore the "hang up" (SIGHUP) signal sent when the terminal is closed; once the shell exits, the process is re-parented to init. All output (standard output and standard error) is redirected to the file nohup.out.
nohup exists both as a standalone program and as a builtin in some shells, so check your shell's man page for more details.
This works for me when running a jupyter notebook server in the background.
$> nohup jupyter notebook --allow-root > error.log &
Stopping the nohup'd jupyter notebook is simple.
First, find the PID of jupyter:
$> ps -ef| grep jupyter
Example output:
root 11417 2897 2 16:00 pts/0 00:04:29 /path/to/jupyter-notebook
Then kill the process:
$> kill -9 11417
You can also simplify this by storing the pid with:
$> nohup jupyter notebook --allow-root > error.log & echo $! > pid.txt
That is, you can then stop the notebook with:
$> kill -9 $(cat pid.txt)
An alternative way to stop the jupyter notebook is to quit from the notebook page.
You can use screen to run it.
screen -A -m -d -S anyscreenname jupyter notebook --no-browser
This will start jupyter in a screen and you can access screen using screen commands.
Actually, jupyter notebook & alone is not enough; the backend will still log to your terminal.
What you need is (cited from this issue):
jupyter notebook > /path/to/somefileforlogging 2>&1 &
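The redirections in that command can be checked with a stand-in that writes to both streams (both.log is an arbitrary file name for this demo):

```shell
# '>' sends stdout to the file, '2>&1' then points stderr at the same
# place, and a trailing '&' would background the whole job.
sh -c 'echo out; echo err 1>&2' > both.log 2>&1
grep -c . both.log   # both lines landed in the file: prints 2
```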
You can start up the notebook in a screen or tmux session. Makes it easy to check error messages, etc.
For remote machines, jupyter notebook & works fine.
However, on local machines the process stops when you close the terminal.
For local machines, use tmux.
Not real sophisticated but it gets the job done:
#! /bin/bash
# probably should change this to a case switch
if [ "$1" == "end" ]
then
    echo
    echo "Shutting Down jupyter-notebook"
    killall jupyter-notebook
    echo
    exit 0
fi
if [ "$1" == "-h" ]
then
    echo
    echo "To start : jnote <port> [default 8888]"
    echo "To end : jnote end"
    echo "This help : jnote -h"
    echo
    exit 0
fi
# cast from string
PORT=$(($1))
RETURN=0
PID=0
if [ "$PORT" == "0" ] || [ "$PORT" == "" ]; then PORT=8888; fi
echo
echo "Starting jupyter-notebook"
# background and headless, set port, allow colab access, capture log, don't open browser yet
nohup jupyter notebook \
    --NotebookApp.allow_origin='https://colab.research.google.com' \
    --port=$PORT --NotebookApp.port_retries=0 \
    --no-browser >~/jnote.log 2>&1 &
RETURN=$?
PID=$!
# Wait for bg process to complete - add as needed
sleep 2
if [ $RETURN == 0 ]
then
    echo
    echo "Jupyter started on port $PORT and pid $PID"
    echo "Goto `cat ~/jnote.log | grep localhost: | grep -v NotebookApp`"
    echo
    exit 0
else
    echo
    echo "Jupyter failed to start on port $PORT and pid $PID with return code $RETURN"
    echo "see ~/jnote.log"
    echo
    exit $RETURN
fi
Put the process into the background by using jupyter notebook &.
Then type disown, or disown <the process id>.
You can close the terminal now.
Detach the Jupyter process from the controlling terminal and send all its input and output to /dev/null, a special device file that discards any data written to it.
jupyter notebook </dev/null &>/dev/null &
Lazy people like me would prefer to edit ~/.bash_aliases and create an alias:
alias jnote='jupyter notebook </dev/null &>/dev/null &'
Reference: https://www.tecmint.com/run-linux-command-process-in-background-detach-process/
As suggested by one of the users, using
jupyter notebook &
solves the issue. Regarding the comments stating that the kernel is killed after closing the terminal: you are probably using
jupyter-notebook & instead.
If you are using the iTerm2 software, you first need to set up Homebrew's shell environment:
brew shellenv
Then start jupyter under nohup:
eval $(/usr/local/bin/brew shellenv)
nohup jupyter notebook &
Here is a command that launches jupyter in the background (&), detached from the terminal process (disown), and without opening the browser (--no-browser).
No logs will be shown on the terminal, since they are redirected (&>) to the file jupyter_server.log, so they can still be consulted later.
jupyter notebook --no-browser &> jupyter_server.log & disown
If you don't want to store the logs (discouraged):
jupyter notebook --no-browser &> /dev/null & disown
Thanks to the other answers this one is built upon: here and there

How to run a command so that it keeps running even if the console is closed?

I am deleting a folder on my server. The size of that folder is around 100 GB.
I am using rm -rf cache
Is there any way that, even if I close the console or the network disconnects, the command keeps running until it completes?
My server is on CentOS 6.
Thanks in advance
nohup rm -rf cache > /dev/null 2>&1 &
To run commands in the background, you can append the '&' symbol to your command,
as follows:
nohup rm -rf cache &
You can also read the status/output of your command at run time:
tail -10 nohup.out
This gives you the run-time output (like the console output would).
Hope this helps.
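Putting the two answers together, the pattern looks like this; a short echo/sleep command stands in for the long-running rm -rf cache:

```shell
# 'sh -c ...' stands in for rm -rf cache; output goes to nohup.out as usual.
nohup sh -c 'echo "deleting..."; sleep 1; echo "done"' > nohup.out 2>&1 &
wait $!                # only for this demo; normally you would just log out
tail -10 nohup.out     # inspect the run-time output at any point
```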

Run a service automatically in a docker container

I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it exits automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash, the riak process is not started (UPDATE: see answers for an explanation of this). In fact, no services are running at all. I can start it manually from the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well; Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach of setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in the CMD in accordance with sesm's remark
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged about using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple CMD statements; only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overwritten in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes the process to block until I later attach to it and hit Enter.
arthur@macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur@macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID            IMAGE            COMMAND               CREATED     STATUS     PORTS
35f253bdf45a  subsonic:latest  /bin/bash -c service  2 days ago  Up 2 days  4040->4040
arthur@macro:~/docker$ sudo docker attach 35f253bdf45a
arthur@macro:~/docker$ sudo docker ps
ID            IMAGE            COMMAND               CREATED     STATUS     PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean up, such as stopping services and saving logs etc.
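The blocking-read trick can be tried outside Docker too. A sketch where 'touch started.flag' stands in for 'service subsonic start', and stdin is fed immediately so the demo returns (inside docker attach, the read would block until you press Enter):

```shell
# 'touch started.flag' stands in for starting the service; the read then
# blocks until a line arrives on stdin.
echo go | bash -c 'touch started.flag && read -p "waiting" line'
# started.flag now exists, and the read consumed the line "go".
```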
I use a simple trick whenever I start building a new docker container: to keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure I can ping.
This is, by the way, always nice to have, for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.

Jenkins seems to be the target for nohup in a script started via ssh, how can I prevent that?

I am trying to create a Jenkins job that restarts a program that runs all the time on one of our servers.
I specify the following as the command to run:
cd /usr/local/tool && ./tool stop && ./tool start
The script 'tool' contains a line like:
nohup java NameOfClass &
The output of that ends up in my build console instead of in nohup.out, so the job never terminates unless I terminate it manually, which terminates the program.
How can I cause nohup to behave the same way it does from a terminal?
If I understood the question correctly, Jenkins is killing all processes at the end of the build and you would like some process to be left running after the build has finished.
You should read https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
Essentially, Jenkins searches for processes with some secret value in BUILD_ID environment variable. Just override it for the processes you want to be left alone.
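Overriding the variable only for the processes you launch uses ordinary per-command environment assignment. A minimal sketch of that mechanism (build_id.txt is just a scratch file for the demo):

```shell
# VAR=value cmd sets the variable only in cmd's environment; this is the
# same mechanism as BUILD_ID=dontKillMe nohup ... in a Jenkins build step.
BUILD_ID=dontKillMe sh -c 'echo "child sees BUILD_ID=$BUILD_ID"' > build_id.txt
echo "parent sees BUILD_ID=${BUILD_ID:-unset}"   # parent shell is unaffected
```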
In the new Pipeline jobs, setting BUILD_ID no longer prevents Jenkins from killing your processes once the job finishes. Instead, you need to set JENKINS_NODE_COOKIE:
sh 'JENKINS_NODE_COOKIE=dontKillMe nohup java NameOfClass &'
See the wiki on ProcessTreeKiller and this comment in the Jenkins Jira for more information.
Try adding the & in the Jenkins build step and redirecting the output using > nohup.out.
I had a similar problem with running a shell script from Jenkins as a background process. I fixed it by using the command below:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh &
In your case,
BUILD_ID=dontKillMe nohup java NameOfClass &

disown a process in ksh

The "disown" command works in bash, but not in ksh.
If I have started a process in ksh, how can I "disown" it so I can exit my shell?
(I know about nohup, but the process has already started!)
ksh93 supports the disown command. Also, some versions of nohup allow you to specify a process id with the -p option, instead of a command.
In ksh, just run disown without the -h option. That's it.
From the ksh(1) manual:
disown [ job... ]
    Causes the shell not to send a HUP signal to each given job, or to all
    active jobs if job is omitted, when a login shell terminates.
