How to stop a Gluster volume forcefully

According to $ gluster volume help:
volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
But even when I run it accordingly:
$ gluster volume stop <VOLNAME> force
it still prompts me with
Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
How could we skip this confirmation process?

The command below will help you skip the confirmation prompt:
echo 'y' | gluster volume stop VolumeName

Check whether this helps:
echo 'y' | gluster volume stop <VOLNAME> force
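If your GlusterFS version supports it (check gluster --help on your build), the CLI's non-interactive script mode also suppresses the prompt:
gluster --mode=script volume stop <VOLNAME> force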


Control remote pi GPIO with domoticz

There are 2 Pis in this setup:
- PI-domo: running Domoticz
- PI-pump: controlling a pump with one GPIO
The two Pis are far apart but can communicate over the network. PI-domo has passwordless ssh login set up to PI-pump and contains three scripts:
- pump_on.sh: sends a value to the GPIO over ssh to turn the pump on and returns 1
`ssh pi@pi-pump -n "echo 0 > /sys/class/gpio/gpio18/value" && echo 1`
- pump_off.sh: sends a value to the GPIO over ssh to turn the pump off and returns 0
`ssh pi@pi-pump -n "echo 1 > /sys/class/gpio/gpio18/value" && echo 0`
- pump_status.sh: returns 1 if the pump is on, 0 if the pump is off (a sketch of this one follows below).
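A minimal sketch of pump_status.sh, which was not shown in the original post, assuming the same active-low wiring as the other scripts (GPIO value 0 means the pump is on):
`ssh pi@pi-pump -n "cat /sys/class/gpio/gpio18/value" | grep -q '^0$' && echo 1 || echo 0`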
All three scripts work as expected when launched in bash, but I cannot find how to call them from Domoticz. I created a virtual switch and set those as script:///.....[on off].sh, but Domoticz doesn't seem to be running any of them, nor could I find a place to read the status...
Any idea or link to a RECENT (working) tutorial would be welcome!
Found the issue: stupid me.
It turns out the Domoticz process was running as root, and root didn't have the key set up for passwordless ssh.
I know that this is an old thread and it has been answered already, but I stumbled on the same issue and found that the online answers lacked detail. So, here it goes:
On PI-domo, run sudo su to become root
Generate a new key using ssh-keygen -t rsa -b 4096 -C "nameofyourkey"
Copy your key to PI-pump by using ssh-copy-id -i /root/.ssh/yourkey.pub pi@pi-pump
ssh to pi-pump to test that ssh as root works without a password; if all is well, exit and go back to being the pi user.
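For that last test, a quick non-interactive check can be used (a suggestion, not from the original answer; BatchMode makes ssh fail immediately instead of prompting for a password, so a missing or unreadable root key shows up right away):
ssh -o BatchMode=yes pi@pi-pump 'echo key-based login OK'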
Note 1: Although you log in as root on PI-domo, it is critical that pump_off.sh and pump_status.sh contain pi@pi-pump and not root@pi-pump, or this approach will fail.
Note 2: The Domoticz log indicates that the above process has an error by outputting Error: Error executing script command (/home/pi/domoticz/scripts/pump_off.sh). returned: 65280. Note the 65280 error in particular: 65280 is 255 shifted left by 8 bits, i.e. the script exited with status 255, which is ssh's usual exit code when it cannot connect or authenticate.

Asterisk does not start up after Trixbox reboots

I have been struggling with this for a while and can't seem to find the solution. I am running Trixbox v2.8.0.4 with Asterisk 1.6.
Whenever my box loses power or is rebooted, Asterisk does not start - Unable to connect to remote asterisk (does /var/run/asterisk.ctl exist?)
This means I need to log in and run amportal start / amportal restart. After this, Asterisk works 100% again.
Does anyone have any suggestions?
System V init.d scripts run one by one.
In the case of Trixbox, that means if sendmail has NOT started, Asterisk will not be started either.
Sendmail will not start if you have an incorrect domain name. If you have no DNS, use something like trixbox.local
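As an illustration of the hostname fix (the IP address and names below are placeholders, not from the original answer), making the box's name resolve locally in /etc/hosts is usually enough to stop sendmail from stalling at boot:
127.0.0.1     localhost.localdomain localhost
192.168.1.10  trixbox.local trixbox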
The problem for one of my customers in this situation was SELinux. Try:
# restorecon -R /etc/asterisk
# restorecon -R /var/lib/asterisk
# restorecon -R /var/run/asterisk.ctl
# restorecon -R /var/spool/asterisk
This fixed the issue for them.
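If relabeling alone does not help, one way to confirm SELinux is involved (a diagnostic step only, not a permanent fix) is to switch to permissive mode and try starting Asterisk again:
# getenforce
# setenforce 0
setenforce 0 only lasts until the next reboot; to test the boot-time behaviour itself, set SELINUX=permissive in /etc/selinux/config and reboot.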

Shell Script to Check the Status of an Informatica Workflow

We have two Informatica jobs that run in parallel.
One starts at 11.40 CET and has around 300 Informatica workflows in it, one of which is fact_sales.
The other job runs at 3.40 CET and has around 115 workflows in it, many of which depend on fact_sales in terms of data consistency.
The problem is that fact_sales should finish before certain workflows in process 2 start for the data to be accurate, but this generally doesn't happen.
What we are trying to do is split process 2 in such a way that the fact_sales-dependent workflows run only after fact_sales has finished.
Can you suggest a way to write a unix shell script that checks the status of fact_sales and, if it was successful, kicks off the other dependent workflows, and if not, sends a failure mail?
thanks
I don't see the need to write a custom shell script for this. Most of this is pretty standard/common functionality that can be implemented using a Command task and Event-Wait tasks.
Process1 - runs at 11:50
....workflow
...
fact_sales workflow -- add a Command task at the end that drops a flag file, say, fact_sales_0430.done
...
....workflow..500
And all the dependent processes will have an Event-Wait task that waits on this .done file. Since there are multiple dependent workflows, make sure none of them deletes the file right away. You can drop this .done file at the end of the day or when the load starts for the next day. (An example command for the flag file is shown after the sketch below.)
workflow1
.....
dependantworkflow1 -- Event wait, waiting on fact_sales_0430.done (do not delete file).
dependantworkflow2 -- Event wait, waiting on fact_sales_0430.done (do not delete file).
someOtherWorkflow
dependantworkflow3 -- Event wait, waiting on fact_sales_0430.done (do not delete file).
....
......
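For illustration, the Command task at the end of the fact_sales workflow could be as simple as the following (the directory is an assumption - use any shared location both processes can read; $PMTargetFileDir is just one common choice):
touch $PMTargetFileDir/fact_sales_0430.done
Each dependent workflow would then begin with an Event-Wait task of the pre-defined (file watch) type pointing at that same path.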
A second approach can be as follows -
You must be running some kind of scheduler for launching these workflows, since Informatica can't schedule multiple workflows as a set; it can only handle worklets/sessions at that level of dependency management.
From the scheduler, create a dependency between the sales fact load workflow and the other dependent workflows.
I think the script mentioned below will work for you. Please update the parameters.
WAIT_LOOP=1
while [ ${WAIT_LOOP} -eq 1 ]
do
# WORKFLOW_NAME holds the workflow to poll, i.e. fact_sales
WF_STATUS=`pmcmd getworkflowdetails -sv $INFA_INTEGRATION_SERVICE -d $INFA_DOMAIN -uv INFA_USER_NAME -pv INFA_PASSWORD -usd Client -f $FOLDER_NAME $WORKFLOW_NAME | grep "Workflow run status:" | cut -d'[' -f2 | cut -d']' -f1`
echo ${WF_STATUS} | tee -a $LOG_FILE_NAME
case "${WF_STATUS}" in
Aborted)
WAIT_LOOP=0
;;
Disabled)
WAIT_LOOP=0
;;
Failed)
WAIT_LOOP=0
;;
Scheduled)
WAIT_LOOP=0
;;
Stopped)
WAIT_LOOP=0
;;
Succeeded)
WAIT_LOOP=0
;;
Suspended)
WAIT_LOOP=0
;;
Terminated)
WAIT_LOOP=0
;;
Unscheduled)
WAIT_LOOP=0
;;
esac
if [ ${WAIT_LOOP} -eq 1 ]
then
sleep $WAIT_SECONDS
fi
done
if [ "${WF_STATUS}" = "Succeeded" ]
then
# DEPENDENT_WORKFLOW_NAME is the workflow that depends on fact_sales
pmcmd startworkflow -sv $INFA_INTEGRATION_SERVICE -d $INFA_DOMAIN -uv INFA_USER_NAME -pv INFA_PASSWORD -usd Client -f $FOLDER_NAME -paramfile $PARAMETER_FILE $DEPENDENT_WORKFLOW_NAME | tee $LOG_FILE_NAME
else
(echo "Please find attached Logs for Run" ; uuencode $LOG_FILE_NAME $LOG_FILE_NAME )| mailx -s "Execution logs" $EMAIL_LIST
exit 1
fi
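For completeness, the script above assumes these parameters are set beforehand (the values here are placeholders; note that pmcmd's -uv/-pv options take the names of environment variables holding the credentials, so those two must be exported):
INFA_INTEGRATION_SERVICE=INT_SVC_NAME
INFA_DOMAIN=DOMAIN_NAME
export INFA_USER_NAME=repo_user
export INFA_PASSWORD=repo_password
FOLDER_NAME=MY_FOLDER
WORKFLOW_NAME=fact_sales
DEPENDENT_WORKFLOW_NAME=wf_dependent_one
PARAMETER_FILE=/path/to/parameter.file
LOG_FILE_NAME=/tmp/fact_sales_poll.log
WAIT_SECONDS=60
EMAIL_LIST=team@example.com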
I can see your main challenge - keeping dependencies between a large number of Informatica workflows.
You have two options:
You can use an automated scheduling tool to set the dependencies and run the workflows one by one properly. There are many free tools; which one to choose depends on your comfort, time, cost, etc.
Secondly, you can create your own custom job scheduler. I built a similar scheduler using a UNIX script and an Oracle table. Here are the steps for that -
Categorize all your workflows into groups: independent workflows go into group 1, workflows that depend on group 1 go into group 2, and so on.
Set up your process to pick workflows up one by one from these groups and kick them off; if the kick-off queue is empty, it should wait. Call this loop 2.
Keep a polling loop that checks the status of the kicked-off workflows. If one has failed or been aborted, fail the process, mail the user, and mark all in-queue/dependent workflows as failed. If it is still running, keep polling. If it has succeeded, give control back to loop 2.
If the kick-off queue is empty, move on to the next group only when all workflows in the current group have succeeded.
This is a somewhat tricky process, but it pays off once you set it up. You can add as many workflows as you want, and maintenance is much smoother compared to the Informatica scheduler or worklets. (A rough sketch of the control flow follows.)
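A very rough shape of that control flow in shell (a sketch only: the group list files, the serial execution, and the mail handling simplify the table-driven design described above; pmcmd's -wait option makes each call block until the workflow completes):
for GROUP_FILE in group1.lst group2.lst group3.lst
do
    while read WF
    do
        # start the workflow and wait for it; a non-zero exit means it did not succeed
        pmcmd startworkflow -sv $INFA_INTEGRATION_SERVICE -d $INFA_DOMAIN -uv INFA_USER_NAME -pv INFA_PASSWORD -usd Client -f $FOLDER_NAME -wait $WF || {
            echo "$WF failed, stopping the load" | mailx -s "Load failed" $EMAIL_LIST
            exit 1
        }
    done < $GROUP_FILE
done
Running each group's workflows one at a time is simpler than the parallel kick-off-and-poll design above, but it preserves the same group ordering.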
You can fire a query against the repository database using views such as REP_SESS_LOG and check whether the fact_sales run has succeeded or not. Only then do you proceed with the second job.
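A hedged sketch of that idea in shell: the original answer names REP_SESS_LOG, which is session-level; the workflow-level view REP_WFLOW_RUN and its RUN_STATUS_CODE column are used here instead, but the view, column names, and status codes should all be verified against your PowerCenter repository, and the connection variables are placeholders:
WF_RUN_STATUS=$(sqlplus -s $REPO_USER/$REPO_PASSWORD@$REPO_DB <<EOF
set heading off feedback off
select max(run_status_code) from rep_wflow_run where workflow_name = 'fact_sales';
EOF
)
# a RUN_STATUS_CODE of 1 is commonly documented as Succeeded, but confirm it for your version before branching on it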

KSH: Block two processes from running at the same time

I have two processes that run at random times, and I want to make sure they never run at the same time, due to a reader-writer problem. My thought is that whenever a process runs, I create a LOCK file; both processes have logic that checks whether a LOCK exists. If the LOCK exists, sleep for a bit, then wake up and check again. Here is a small piece of it
if [[ ! -f ${INPUT_DIR}/LOCK ]]
then
#Create LOCK file
cat /dev/null > ${INPUT_DIR}/LOCK
retcode=${?}
if [[ ${retcode} -ne 0 ]]
then
echo `date` "Error in creating LOCK file by processA.sh - Error code: " ${retcode} >> ${CORE_LOG}
exit
fi
echo `date` "LOCK turns on by processA.sh" >> ${CORE_LOG}
...
rm ${INPUT_DIR}/LOCK
fi
However, this does not QUITE stop the two processes from running at the same time. There are rare times when both processes get past the first IF check for whether the LOCK exists (if both are invoked at the same time and no LOCK exists, it is very likely both will get past that first IF statement); both then try to create a LOCK file, and since cat /dev/null > ${INPUT_DIR}/LOCK will not generate an error even when the LOCK already exists, neither one notices. Is there a solution to this?
For the main versions of unix, the preferred solution is to use a lock directory; I would assume this is true for Linux as well, but I haven't had to test it recently.
Creating a directory is an atomic operation, and only one of the processes will succeed, assuming you are using a fixed name like /bin/mkdir /tmp/myProjWorkSpace/LOCK (without -p, so the call fails if the directory already exists). If you need to have information embedded in your lock, then you need a file, and you need separate subdirectories per process, possibly adding the process ID (.$$) to the directory name.
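A minimal sketch of that approach in the question's own style (same variables as the question; the LOCK.d name is just an example):
LOCKDIR=${INPUT_DIR}/LOCK.d
if /bin/mkdir ${LOCKDIR} 2> /dev/null
then
    echo `date` "LOCK acquired by processA.sh" >> ${CORE_LOG}
    # ... do the work that must not overlap ...
    rmdir ${LOCKDIR}
else
    echo `date` "LOCK held by another process, will retry later" >> ${CORE_LOG}
    exit 1
fi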
I hope this helps.

Write a background process to check whether a process is still active

In UNIX, I have a utility, say 'Test_Ex', a binary file. How can I write a job or a shell script (as a cron job), always running in the background, which checks every 5 seconds whether 'Test_Ex' is still running (and probably hides this job)? If it is running, do nothing. If not, delete a directory at the specified path.
Try this script:
pgrep Test_Ex > /dev/null || rm -r dir
If you don't have pgrep, use
ps -e -ocomm | grep Test_Ex || ...
instead.
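If you do go the cron route despite the caveats in the last answer below, remember that cron's finest granularity is one minute, so a 5-second check needs a small loop of its own. A sketch (the script path and the directory to delete are placeholders):
* * * * * /home/user/check_test_ex.sh
with check_test_ex.sh along the lines of:
#!/bin/sh
# check roughly every 5 seconds for one minute, then let cron start the next run
for i in 1 2 3 4 5 6 7 8 9 10 11 12
do
    if ! pgrep Test_Ex > /dev/null
    then
        rm -rf /path/to/dir   # placeholder path
        break
    fi
    sleep 5
done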
Utilities like upstart, originally part of the Ubuntu Linux distribution I believe, are good for monitoring running tasks.
The best way to do this is to not do it. If you want to know if Test_Ex is still running, then start it from a script that looks something like:
#!/bin/sh
Test_Ex
logger "Test_Ex died"
rm /p/a/t/h
or
#!/bin/sh
while ! Test_Ex
do
logger "Test_Ex terminated unsuccessfully, restarting in 5 seconds"
sleep 5
done
Querying ps regularly is a bad idea, and trying to monitor it from cron is a horrible, horrible idea. There seems to be some comfort in the idea that crond will always be running, but you can no more rely on that than you can rely on the wrapper script staying alive; either one can be killed at any time. Waking up every 10 seconds to query ps is just a waste of resources.
