I am planning to write a log processing application using RabbitMQ, Symfony2 and the RabbitMqBundle.
The tool I am working on has to be highly available and must process millions of entries per day, so it's important that the consumers are always up and running (short breaks are fine); otherwise my queue might overflow after a while.
Are there best practices on how to manage the consumers (written in PHP), start/restart them in case of an error etc?
Thanks
I use this bash script to make sure that all required consumers are running on imagepush.to:
#!/bin/bash

NB_TASKS=1
SYMFONY_ENV="prod"

TEXT[0]="app/console rabbitmq:consumer primary"
TEXT[1]="app/console rabbitmq:consumer secondary"

for text in "${TEXT[@]}"
do
    # count how many instances of this consumer are already running
    NB_LAUNCHED=$(ps ax | grep "$text" | grep -v grep | wc -l)

    TASK="/usr/bin/env php ${text} --env=${SYMFONY_ENV}"

    # launch new consumers until NB_TASKS of them are running
    for (( i=${NB_LAUNCHED}; i<${NB_TASKS}; i++ ))
    do
        echo "$(date +%c) - Launching a new consumer"
        nohup $TASK &
    done
done
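If you want the check to run continuously, one option is to run the script from cron; a minimal crontab entry might look like this (the script path and log location are assumptions):
# every minute, relaunch any consumers that have died (hypothetical paths)
* * * * * /path/to/check_consumers.sh >> /var/log/consumer_watchdog.log 2>&1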
If I remember correctly, I based it on code from KnpBundles.
Stop consumer
To stop your consumer with name my_consumer use
kill `ps aux | grep 'rabbitmq:consumer my_consumer' | grep -v grep | awk '{print $2}'`
ps aux | grep 'rabbitmq:consumer my_consumer' - finds all running processes of the consumer
grep -v grep - excludes your own search process
awk '{print $2}' - keeps only the process id from each row
kill - terminates all found processes
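On systems that have pkill, a shorter equivalent that matches against the full command line would be:
pkill -f 'rabbitmq:consumer my_consumer'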
Start consumer
To start your consumer with name my_consumer use
nohup /usr/bin/env php app/console rabbitmq:consumer my_consumer --env=prod &
I have a lot of consumers in the project, and it became hard to restart them all after each deploy, so I started using Capistrano + the Symfony plugin to deploy the project. I wrote a few custom tasks to start/stop/restart the consumers based on a yaml config; the tasks are based on the commands above (a rough bash equivalent is sketched below).
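For illustration only, a bash sketch of such a restart task might look like this (the consumers.conf file and its one-consumer-name-per-line format are hypothetical):
#!/bin/bash
# Hypothetical config: one consumer name per line
while read -r consumer; do
    # stop any running instance of this consumer
    pkill -f "rabbitmq:consumer $consumer"
    # start a fresh one
    nohup /usr/bin/env php app/console rabbitmq:consumer "$consumer" --env=prod &
done < consumers.conf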
Related
My fan is whirring on my Ubuntu laptop and htop is showing my CPU as maxed out.
However, looking at the processes ordered by CPU it doesn't seem like too much is going on other than gjs at 41.3%.
I'm assuming there are just a ton of gjs processes that are adding up to the rest of the CPU.
Is there any way to work this out other than manually adding up the CPU%?
NAME="Ubuntu"
VERSION_ID="21.10"
You can sum up CPU usage as shown here.
ps -eo pcpu,command --sort=-pcpu | grep gjs | awk '{sum+=$1} END {print sum}'
The solution they linked actually sums memory, not CPU usage (probably a bug they never caught); I've fixed it here so it should work for you.
If you want a reusable shell script, write this to a file named cpusum, for example:
#!/bin/bash
ps -eo pcpu,command --sort=-pcpu | grep "$1" | awk '{sum+=$1} END {print sum}'
then make it executable (chmod +x cpusum) and run it: ./cpusum gjs
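One caveat: depending on timing, the grep process itself can show up in the ps output and match its own pattern, slightly inflating the sum. The usual trick to avoid that is a bracket pattern:
ps -eo pcpu,command --sort=-pcpu | grep "[g]js" | awk '{sum+=$1} END {print sum}'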
I installed Grakn on Unix and it was working fine earlier, but now it fails to start.
I tried to run it using the command below:
./grakn server start
I get the error below:
Starting Storage-FAILED!
Unable to start Storage
Please run 'grakn server status' or check the logs located under 'logs' directory.
There may be a lot of things happening under the hood; without looking at the logs it's hard to tell exactly what is going on. You can try killing all the Grakn processes and then removing the associated PID files from the /tmp/ directory, then retry starting the Grakn server.
$ for KILLPID in `ps ax | grep 'grakn' | grep -v grep | awk '{print $1}'`; do kill -9 $KILLPID; done
$ ps -ef | grep defunct | grep -v grep | cut -b8-20 | xargs kill -9
$ rm -rf /tmp/grakn-*
Let me know if it helps.
As of now I am creating topics one by one, using the command below.
sh ./bin/kafka-topics --create --zookeeper localhost:2181 --topic sdelivery --replication-factor 1 --partitions 1
Since I have 200+ topics to be created, is there any way to create the whole list of topics with a single command?
I am using 0.10.2 version of Apache Kafka.
This seems like more of a unix/bash question than a Kafka one: the xargs utility is specifically designed to run a command repeatedly from a list of arguments. In your specific case you could use:
cat topic_list.txt | xargs -I % -L1 sh ./bin/kafka-topics --create --zookeeper localhost:2181 --topic % --replication-factor 1 --partitions 1
If you want to do a "dry run" and see what commands will be executed you can replace the sh with echo sh.
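Here topic_list.txt is assumed to contain one topic name per line, e.g.:
sdelivery
sreturns
sbilling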
Alternatively, you can make sure that your broker config has default topic settings of replication factor 1 and 1 partition, and simply allow those topics to be created automatically the first time you send a message to them.
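Those defaults live in the broker's server.properties; a minimal sketch of the relevant settings (values taken from your command) would be:
# server.properties - allow topics to be created on first use
auto.create.topics.enable=true
default.replication.factor=1
num.partitions=1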
You could use Terraform or Kubernetes Operators, which will not only help you create topics (with one command) but also manage them later if you need to delete or modify their configs.
But without custom solutions, and purely for the purpose of batch creation, you can use awk for this.
Create a file
$ cat /tmp/topics.txt
test1:1:1
test2:1:2
Then use the awk system() function to execute the kafka-topics script and parse the file:
$ awk -F':' '{ system("./bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic=" $1 " --replication-factor=" $2 " --partitions=" $3) }' /tmp/topics.txt
Created topic "test1".
Created topic "test2".
And we can see the topics are created
$ ./bin/kafka-topics.sh --zookeeper localhost:2181 --list
test1
test2
Note: creating hundreds of topics this quickly might overload Zookeeper, so it might help to append "; sleep 10" to the command inside system().
As of Kafka 3.0, the --zookeeper flag is replaced by --bootstrap-server.
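So on a 3.0+ broker the same awk approach would presumably become (the broker address is an assumption):
$ awk -F':' '{ system("./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic=" $1 " --replication-factor=" $2 " --partitions=" $3) }' /tmp/topics.txt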
Webpages are loading very slowly: it takes around 6 seconds before the server even starts sending the page data, which is then sent in 0.2 seconds and generated in 0.19 seconds.
I doubt it is caused by PHP or the browser, so the problem must be with the server, which runs nginx and php5-fpm.
A server admin said that the problem was indeed caused by a misconfigured fpm or nginx.
How can I debug the cause of the slowdown?
Setup: php5.3, mysql5, linux, nginx, php5-fpm
This question is probably too broad for Stack Overflow, as the answer could span several pages and topics.
However, if the question were just "how do I debug the performance of PHP-FPM?", the answer would be much easier: use strace and the script below.
#!/bin/bash

mkdir -p trc
rm -rf trc/*.trc

additional_strace_args="$1"

MASTER_PID=$(ps auwx | grep php-fpm | grep -v grep | grep 'master process' | cut -d ' ' -f 7)

summarise=""
# prints a summary count of calls instead of the full trace - comment in to use
#summarise="-c"

nohup strace -r $summarise -p $MASTER_PID -ff -o ./trc/master.follow.trc >"trc/master.$MASTER_PID.trc" 2>&1 &

# attach strace to every php-fpm worker as well
while read -r pid;
do
    if [[ $pid != $MASTER_PID ]]; then
        nohup strace -r $summarise -p "$pid" $additional_strace_args >"trc/$pid.trc" 2>&1 &
    fi
done < <(pgrep php-fpm)

read -p "Strace running - press [Enter] to stop"

pkill strace
That script will attach strace to all of the running php-fpm instances. Any requests to the web server that reach php-fpm will have all of their system calls logged, so you can inspect which calls take the most time and figure out what needs optimising first.
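As a starting point for the analysis: the -r flag prefixes every syscall with the time elapsed since the previous one, so a numeric sort across the trace files surfaces the biggest gaps first (a rough sketch, assuming the trc/ layout produced by the script above):
# show the 20 slowest gaps between consecutive syscalls across all traced processes
sort -nr trc/*.trc | head -n 20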
On the other hand, if you can see from the strace output that PHP-FPM is processing each request quickly, that lets you eliminate it as the problem and investigate nginx instead, including how nginx talks to PHP-FPM, which could also be the cause.
@Danack saved my life.
But I had to change the command to get the MASTER_PID:
MASTER_PID=$(ps auwx | grep php-fpm | grep -v grep | grep 'master process' | sed -e 's/ \+/ /g' | cut -d ' ' -f 2)
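An alternative that avoids the column counting entirely would be pgrep, assuming the master process is titled 'php-fpm: master process' in your distribution:
MASTER_PID=$(pgrep -o -f 'php-fpm: master process')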
This function takes a huge amount of time to work out the status of the processes, because it has to ssh into a machine separately for every single process.
I only have four machines and around 50+ processes to monitor; the details are listed in configDaemonDetails.txt,
like:
abc#sn123|Daemon_1|processname_1
abc#sn123|Daemon_2|processname_2
efg#sn321|Daemon_3|processname_3
How can I reduce the time by sshing into each machine only once and collecting the information for all of its processes as defined in the txt file?
CheckProcessStatus ()
{
    echo " ***** Checking Process Status ***** "
    echo "========================================================="
    IFS='|'
    cat configDaemonDetails.txt | grep -v "^#" | while read MachineDetail Daemon ProcessName
    do
        # one ssh per process - this is what makes it slow
        Status=`ssh -f -T ${MachineDetail} ps -ef | egrep -v "grep|less|vi|more" | grep "$ProcessName"`
        # keep only the first matching line and pull out the start time fields
        RunTime=`echo "$Status" | sed -n '1p' | awk '{print $5" "$6}'`
        if [ -z "$Status" ]
        then
            echo "The Process is DOWN $Daemon | $ProcessName "
        else
            echo "The Process $Daemon | $ProcessName is up since $RunTime"
        fi
    done
    echo "-----------------------------------------------------"
}
Thanks :)
Can't you just fetch the entire ps -ef output at once, and then parse it appropriately? I suspect that is what you are asking, and maybe all you want is an example of how to do that? If that is the case, say so and I'll flesh out an example.
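To sketch the idea (untested, and assuming the configDaemonDetails.txt format shown above): ssh into each machine once, cache its ps -ef output, and check every configured process against that cached copy:
CheckProcessStatusOnce ()
{
    # one ssh per machine instead of one per process
    for Machine in $(grep -v "^#" configDaemonDetails.txt | cut -d'|' -f1 | sort -u)
    do
        PsOutput=$(ssh -T "$Machine" ps -ef)
        # check every daemon configured for this machine against the cached list
        grep -v "^#" configDaemonDetails.txt | grep "^${Machine}|" | \
        while IFS='|' read MachineDetail Daemon ProcessName
        do
            Status=$(echo "$PsOutput" | egrep -v "grep|less|vi|more" | grep "$ProcessName")
            if [ -z "$Status" ]
            then
                echo "The Process is DOWN $Daemon | $ProcessName"
            else
                echo "The Process $Daemon | $ProcessName is up"
            fi
        done
    done
}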
SSH is a bit overkill for getting the status of a process; I'd suggest using SNMP instead.
E.g., you can get a process list like this:
snmpwalk -v2c -cPASSWORD HOST 1.3.6.1.2.1.25.4.2.1
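If you only care about process names, walking just the hrSWRunName column (the .2 sub-column of that entry, if memory serves) and grepping for your process is usually enough:
snmpwalk -v2c -cPASSWORD HOST 1.3.6.1.2.1.25.4.2.1.2 | grep myprocess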
Take a look at this Nagios plugin that does process checks, and look in the code for the actual SNMP OIDs.