What permissions does a cron job run with on cPanel? Or even better, can I run a cron as a specific user? I had this working on a plesk panel but can't seem to get it going on cPanel.
I am trying to run a cron job for a Symfony project
php-cli /home/appname/public_html/website/app/console appname:images:clean
Within this command in Symfony I am trying to log what I am doing. The error that I get back from the cron is:
[UnexpectedValueException]
The stream or file "/home/appname/public_html/website/app/logs/dev.appname.log" could
not be opened: failed to open stream: Permission denied
The permissions on the file are
-rw-rw-r--+ cpaneluser:cpaneluser
You have probably added the cron job to your own user's crontab instead of cpaneluser's.
You can edit other user's crontab with:
crontab -u cpaneluser -e
And paste this cron job there.
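For example, pasted into cpaneluser's crontab the entry might look like this (the schedule is just an illustration; the command is the one from the question):
0 3 * * * php-cli /home/appname/public_html/website/app/console appname:images:clean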
Another solution could be to change the way you handle permissions in general, but that's a bit more complex a topic.
Related
I have used wp_schedule_event, but it does not fully meet my requirements: it is only triggered when someone clicks or loads a page.
How can I create a cron job that automatically triggers at a specific time, without any user action, in WordPress?
The only way, as far as I know, is to use the cron ability of your server.
For example, in Red Hat based distributions such as CentOS, crontab files are stored in the /var/spool/cron directory, while on Debian and Ubuntu they are stored in the /var/spool/cron/crontabs directory. Open the crontab file and write something like:
* * * * * php /var/www/html/your-website/your-cron.php
This will execute the file your-cron.php every minute.
Note: Although you can edit the user crontab files manually, it is recommended to use the crontab command.
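For example, a minimal sketch; crontab -e edits the per-user crontab and validates it when you save:
crontab -e    # edit the current user's crontab in your default editor
crontab -l    # list the installed entries to confirm the job is there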
We have UNIX-based Informatica. We recently upgraded Informatica from 9.6 to 10.1.
We have two users:
a) pmprod - the application user
b) powercenter - used for installation purposes
We have a shell script to take a repository backup, which we run on a daily basis.
The problem is that even if we execute this script as the "pmprod" user, the
repository backup file is created by the "powercenter" user, which we
don't want.
Before the upgrade it ran successfully.
Executing the shell script as the pmprod user.
After the script executes, checking the user shows powercenter and not pmprod (see screenshot below).
The repository backup is created by the "powercenter" user.
We have used the below commands in the shell script file:
cd /app/powercenter/server10/server/bin/
pmrep connect -r PCREPO_TALEN_AWS_QA -n Administrator -X PMPASS -d PCDOMAIN_TALEN_AWS_QA
pmrep backup -o backup_qa_20170717.rep
Please suggest whether we need to grant specific permissions to any file, or whether there is a workaround we need to apply.
I'm going to discuss a few questions which will help you arrive at a solution.
First set of questions:
Because of this user difference, are you facing any actual problem?
The pmrep command uses the repository Administrator's username and password for repository backup and restore.
So even if you have to restore a repository, the same user that generated the repository backup would work.
I'm not sure what problem this difference causes for you.
Did you try running a restore? Did you face any problems?
Second set of questions:
Can you tell me under which user account you ran the backup commands?
Did you use the pmprod UNIX user or the powercenter UNIX user's account to run the pmrep commands?
OK, that makes sense. Before you upgraded, you had pmprod as the default user for starting your PowerCenter processes; after the upgrade, you have configured the powercenter user to start them. As such, any file created by an Informatica command will have its owner set to the Informatica user, in this case powercenter, regardless of which user invokes the command. You could create a command task in Workflow Manager to chown the file (see the sketch below), and that will sort your problem out, or look here for alternatives: https://network.informatica.com/thread/12401
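A minimal sketch of that ownership fix as a follow-on shell step. The backup directory is an assumption (pmrep writes to the repository service's backup location), and since chown normally requires root, a sudoers rule allowing it is assumed as well:
# Hypothetical cleanup step after the pmrep backup; adjust BACKUP_DIR.
BACKUP_DIR=/app/powercenter/server10/server/infa_shared/Backup
sudo chown pmprod:pmprod "$BACKUP_DIR/backup_qa_20170717.rep"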
Environment: Hortonworks Sandbox HDP 2.2.4
Issue: Unable to run the hadoop commands present in the shell scripts as the root user. The Oozie job is triggered as the root user, but when hadoop fs or any MapReduce command is executed, it runs as the yarn user. Since yarn doesn't have access to some parts of the file system, the shell script fails to execute. Let me know what changes I need to make so that the hadoop commands run as the root user.
It is expected behaviour for yarn to appear whenever we invoke shell actions in Oozie; only the yarn user has the capability to run shell actions. One thing we can do is give the yarn user access permissions on the file system.
This is more of a shell script question than an Oozie question. In theory, an Oozie job runs as the user who submits it. In a Kerberos environment, the user is whoever signed in with a keytab/password.
Once the job is running on the Hadoop cluster, you can use "sudo" within your shell script to change the ownership of a command. In your case, you may also want to make sure the user "yarn" is allowed to sudo to the commands you want to execute; a rough sketch follows.
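The sudoers rule and paths below are illustrative assumptions, not tested configuration:
# Hypothetical sudoers entry (added with visudo) letting yarn run hadoop as root:
#   yarn ALL=(root) NOPASSWD: /usr/bin/hadoop
# Then, inside the script the shell action runs:
sudo hadoop fs -ls /some/restricted/path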
Add the property below to the workflow:
HADOOP_USER_NAME=${wf:user()}
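Equivalently, on a cluster using simple (non-Kerberos) authentication, the variable can be exported at the top of the script the shell action runs; the path below is a placeholder:
#!/bin/sh
# HADOOP_USER_NAME only takes effect with simple authentication, not with Kerberos.
export HADOOP_USER_NAME=root
hadoop fs -ls /some/restricted/path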
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use sudo directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk (see the sketch after this list)
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
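A minimal sketch of step 2 from inside the new VM, assuming the old boot disk shows up as /dev/sdb1:
# The device name /dev/sdb1 is an assumption; confirm it with lsblk.
sudo mkdir -p /mnt/olddisk
sudo mount /dev/sdb1 /mnt/olddisk
sudo visudo -c -f /mnt/olddisk/etc/sudoers    # reports the parse error and line number
sudo visudo -f /mnt/olddisk/etc/sudoers       # edit with validation on save
sudo umount /mnt/olddisk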
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
How to avoid this in the future
Always use the visudo command, rather than an arbitrary text editor, to edit the /etc/sudoers file; it validates the contents of the file prior to saving.
I ran into this issue as well, and hit the same problem Nakilon reported when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
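For what it's worth, one way to attach such a script to the instance (a sketch; the instance name and the fix-sudoers.sh file name are placeholders):
gcloud compute instances add-metadata <instance-name> \
    --metadata-from-file startup-script=fix-sudoers.sh
The instance then needs to be restarted so the script runs at boot.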
As you probably figured out, this requires the /etc/sudoers file to be fixed. Since nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk.
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
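A sketch of the first few steps with gcloud; all instance, disk, and zone names below are placeholders:
gcloud compute disks snapshot old-boot-disk --snapshot-names sudoers-backup --zone us-central1-a
gcloud compute instances stop original-vm --zone us-central1-a
gcloud compute instances create debugger --zone us-central1-a
gcloud compute instances attach-disk debugger --disk old-boot-disk --zone us-central1-a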
Since I bumped into this issue too: if you have another instance, or any place where you can run with gcloud privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud running as root, so you log in to the other box as root too! Then fix your issue. (If you don't have such a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of all the disk juggling.
As mentioned in the comments above, I was getting the same error on a GCP VM:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this, I SSHed to another VM, became root, and then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And boom, you are now logged in as root on the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack easier than recreating the VM and its disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in your Developers Console. Make sure the VM is running, start the shell, and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found on the Compute Engine VM Instances screen. project-id is optional, but required if you are connecting to an instance in a different project from the one where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script, as mentioned above by @Jorick, works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to be executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the cloud console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers for example). If that works your problem has been solved.
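A sketch of that stop/start cycle with gcloud; the instance name and zone are placeholders:
gcloud compute instances stop <instance-name> --zone <zone>
gcloud compute instances start <instance-name> --zone <zone>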
I'm really stuck. I've used cron on many machines, but I simply can not get it to work on an Ubuntu server. I've tried everything I can think of over the weekend and I'm baffled. I'm hoping someone can help me out.
I can verify that cron is running via pgrep cron. However, pgrep crond returns nothing. I am trying to run a simple shell script (test.sh) with cron:
#!/bin/sh
/usr/bin/touch /home/jarvis/test.txt
I have run chmod +x on the shell script and my crontab looks like this:
01 * * * * /home/jarvis/test.sh
I also have a newline at the end of that line.
Try redirecting all output to a file and see if there are any error messages that can help you diagnose the problem, i.e.
01 * * * * /home/jarvis/test.sh > /tmp/jarvis_test.log 2>&1
Also, if you created any of those files in a Windows environment, don't forget to run dos2unix filename
edit
My point is that you won't know what your script is doing in the crontab environment unless you see whether it is outputting an error message. Error messages also go to the local crontab user's email, so try mail (as the same user) and see if you have a bunch of messages from your crontab, as shown below.
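For example (the username jarvis is taken from the paths in the question; the spool path is the Ubuntu default):
mail                    # open the local mailbox of the current user
less /var/mail/jarvis   # or read the mail spool file directly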
pgrep crond returning nothing sounds like a clue, though on Debian/Ubuntu the daemon is named cron rather than crond, so that may be normal; I'm not a sysadmin. Maybe you want to flag this and ask a moderator to move it to https://serverfault.com/
I hope this helps.
I think it is related to user permissions. Try putting the script in the default cron.hourly/cron.weekly directories; if it works there, then your cron is good. Also, check /var/log/syslog (or /var/log/messages on Red Hat systems) for any cron entries. View /etc/crontab for the detailed configuration.
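For example (note that run-parts on Debian/Ubuntu skips file names containing dots, so drop the .sh extension):
sudo cp /home/jarvis/test.sh /etc/cron.hourly/jarvistest   # no dot in the name
sudo chmod +x /etc/cron.hourly/jarvistest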