Can't see Elasticsearch daemon - Unix

I'm trying to run elasticsearch as a daemon on a server.
I have run this command:
./bin/elasticsearch -d -p somefilename
but I can't find any proof of the daemon actually running, since ps -e | grep elastic does not produce any output. How can I see the process?

Elasticsearch runs on the JVM, so to see the PID of Elasticsearch you can use the jps command.
With ps -ef you will only see a generic java process, which is why your grep for "elastic" finds nothing.
If you insist on using ps, you can try this:
ps axms | grep -i elasticsearch
The axms options show the details of all processes.

Related

SSH between N servers using a script

I have N servers, like c0001.test.cloud.com, c0002.test.cloud.com, c0003.test.cloud.com, and I want to ssh between them like this:
from server c0001, ssh to c0002 and then exit that server;
come back to c0001, ssh to c0003 and then exit that server.
The script should run without requiring any input at runtime, so that it works for any number of servers.
I have written one script :
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@$j
done
But this script is not working correctly. I have also tried passing options
such as -o StrictHostKeyChecking=no and using a <<'ENDSSH' heredoc, but it still does not work.
Assuming the number of commands you want to run are small, you could:
Create a script of commands that will run from c0001.test.cloud.com to each of the servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts; do
ssh -o StrictHostKeyChecking=no -q appAccount@$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell.
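For instance, ClusterShell's clush tool can run a command across all the hosts in parallel (a sketch; the host range and user are taken from the question, and uptime is just an illustrative command):

clush -l appAccount -w c000[1-3].test.cloud.com uptime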

How to filter the running jobs on a particular server in Autosys?

I am new to Autosys. Is it possible to find all the running jobs on a particular server in Autosys (like what we do in TWS or other monitoring tools)?
The command to show running jobs on a specific server is this:
autorep -M servername -d
- note that the servername value has to match exactly what's in the JIL.
Also the output isn't detailed like you'd get from "autorep -d -j jobname" - but then it does what you asked!
If you want the job detail you might want to use the previous suggestion. Or you could use autorep -M as I suggested to get the running jobs and pipe that list through autorep -j to get detail just for those specific jobs.
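A sketch of that pipeline (assumptions: the job name is in the first column of the autorep -M output and running jobs are flagged RUNNING; adjust the awk field to match your output):

autorep -M servername -d | awk '/RUNNING/ {print $1}' | while read job; do
    autorep -j "$job" -d
done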
Yes, you can find the list of running jobs in Autosys. From a command prompt, run autorep -j % | find " RU " (on a Unix machine, use grep in place of find).
autorep -M agent_name -d
autorep -M <machine> -d shows the jobs that are running on the specified node. For example:
autorep -M machine1 -d
Machine Name   Max Load   Current Load   Factor   O/S         Status
machine1       ---        ---            1.00     Sys Agent   Online

Current Jobs:
Job Name   Machine    Status    Load   Priority
CMD1       machine1   RUNNING   0      0
CMD2       machine1   RUNNING   0      0
autorep -I MachineName | Find "RU"

Run a service automatically in a docker container

I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started (UPDATE: see answers for an explanation for this). In fact, no services are running at all. I can start it manually using the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well, Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach of setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in the CMD line, in accordance with sesm's remark below.
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged about using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple CMD statements; only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option, the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
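A minimal supervisord.conf sketch for that approach (paths are assumptions; the key points are nodaemon=true, so supervisord itself stays in the foreground, and riak console, so Riak does too):

[supervisord]
nodaemon=true

[program:riak]
command=/bin/riak console
autorestart=true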
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overridden in this case.
By using CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit enter.
arthur@macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur@macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID            IMAGE            COMMAND               CREATED     STATUS     PORTS
35f253bdf45a  subsonic:latest  /bin/bash -c service  2 days ago  Up 2 days  4040->4040
arthur@macro:~/docker$ sudo docker attach 35f253bdf45a
arthur@macro:~/docker$ sudo docker ps
ID            IMAGE            COMMAND               CREATED     STATUS     PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean up, such as stopping services and saving logs etc.
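A sketch of such a wrapper script (subsonic is the service from the example above; the log path is hypothetical):

#!/bin/bash
service subsonic start
read -p "waiting"                       # block here until someone attaches and hits enter
service subsonic stop                   # clean up once unblocked
cp /var/log/subsonic.log /raid/logs/    # hypothetical log location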
I use a simple trick whenever I start building a new Docker container: to keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure ping is installed.
This is, by the way, always nice for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.
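Putting it together, a minimal entrypoint.sh sketch (the service name and the pinged address are placeholders):

#!/bin/bash
service myservice start                 # hypothetical startup work
ping 10.10.0.1 >/dev/null 2>/dev/null   # block forever so the container stays alive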

How to identify which daemon process is writing to a file

I need to identify a daemon process that is writing to a log file periodically. The problem is that I don't have any idea which process is doing the job, and I need to show some progress to the client by tomorrow. Does anybody have a clue?
I have already sorted out the daemon processes running in the system with the help of the PPID. Any help would be appreciated.
Also, I think it is possible (rarely) for a daemon not to have a PPID of 1. How can we find it in that case?
Try the fuser command on your log file, which will display the PIDs of processes using it.
Example:
$ fuser file.log
file.log: 3065
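The -v flag additionally shows the owning user and command for each PID (output illustrative):

$ fuser -v file.log
                     USER     PID  ACCESS COMMAND
file.log:            root     3065 F....  rsyslogd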
lsof lists open files together with the processes that hold them open.
So lsof | grep <filename> should help you.
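You can also pass the path to lsof directly instead of grepping (the path and output here are illustrative; the 7w file descriptor indicates the process has the file open for writing):

$ lsof /var/log/app.log
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd  811 root    7w  REG    8,1    14321  1423 /var/log/app.log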
You can use auditctl.
# sudo apt-get install auditd
# sudo /sbin/auditctl -w /path/to/file -p war -k hosts-file
-w /path/to/file watch this file (the original example watched /etc/hosts, hence the hosts-file key)
-p war watch for write, attribute change, and read events
-k hosts-file is a search key for finding the events later
# sudo /sbin/ausearch -f /path/to/file | more
This gives output such as:
type=UNKNOWN[1327] msg=audit(1459766547.822:130): proctitle=2F7573722F7362696E2F61706163686532002D6B007374617274
type=PATH msg=audit(1459766547.822:130): item=0 name="/path/to/file" inode=141561 dev=08:00 mode=0100444 ouid=33 ogid=33 rdev=00:00 nametype=NORMAL
type=CWD msg=audit(1459766547.822:130): cwd="/"
type=SYSCALL msg=audit(1459766547.822:130): arch=c000003e syscall=2 success=yes exit=41 a0=7f3c23034cd0 a1=80000 a2=1b6 a3=8 items=1 ppid=24452 pid=6797 auid=4294967295 uid=33 gid=33 euid=33 suid=33 fsuid=33 egid=33 sgid=33 fsgid=33 tty=(none) ses=4294967295 comm="apache2" exe="/usr/sbin/apache2" key="hosts-file"

How to shutdown supervisor process correctly/completely?

I am using supervisor to launch and manage an nginx process. So far this works perfectly. The problem I am having is shutting down the instance.
I have tried "supervisorctl shutdown" (and "stop all"); this shuts down the daemon, and in the supervisorctl interactive console it says nginx is stopped. However, if I run ps -A | grep nginx, the process still appears in the list.
My config for the nginx instance is as follows:
[program:nginx]
command=./bin/nginx
    -p /home/me/sites/project.domain.com/
    -c project/etc/nginx.conf
directory=/home/me/sites/project.domain.com
autostart=true
autorestart=true
redirect_stderr=true
exitcodes=0
stopsignal=TERM
Any suggestion as to why nginx might not be shutting down?
Have you made sure you are not starting nginx in daemonized mode? It is important that all child processes of supervisor run in non-daemonized (foreground) mode; otherwise supervisor only tracks, and later signals, the short-lived launcher process while the real daemon keeps running. I don't have the nginx boot options at hand right now, but this might give you a start in the right direction.
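For nginx in particular, the standard way to keep it in the foreground is the daemon off; directive, either set in nginx.conf or passed on the command line with -g. A sketch against the config from the question (paths are taken from the question; the rest is an assumption):

[program:nginx]
command=./bin/nginx -g "daemon off;"
    -p /home/me/sites/project.domain.com/
    -c project/etc/nginx.conf
directory=/home/me/sites/project.domain.com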
