How to run a Symfony command from Heroku's Scheduler

I've configured Heroku's Scheduler to run a Symfony 2 command:
bash app/console myapp:send:confirmations --verbose
And set it to run every 10 minutes.
But in the logs I see these messages:
2015-09-10T13:01:25.313711+00:00 heroku[api]: Starting process with command `bash app/console myapp:send:confirmations` by scheduler@addons.heroku.com
2015-09-10T13:01:44.151426+00:00 heroku[scheduler.7629]: Starting process with command `bash app/console myapp:send:confirmations --verbose`
2015-09-10T13:01:44.811500+00:00 heroku[scheduler.7629]: State changed from starting to up
2015-09-10T13:01:45.565021+00:00 app[scheduler.7629]: app/console: line 2: ?php: No such file or directory
2015-09-10T13:01:45.565093+00:00 app[scheduler.7629]: app/console: line 19: unexpected EOF while looking for matching `''
2015-09-10T13:01:45.565096+00:00 app[scheduler.7629]: app/console: line 28: syntax error: unexpected end of file
2015-09-10T13:01:46.291606+00:00 heroku[scheduler.7629]: State changed from up to complete
2015-09-10T13:01:46.278800+00:00 heroku[scheduler.7629]: Process exited with status 2
These are the three relevant lines that confuse me:
2015-09-10T13:01:45.565021+00:00 app[scheduler.7629]: app/console: line 2: ?php: No such file or directory
2015-09-10T13:01:45.565093+00:00 app[scheduler.7629]: app/console: line 19: unexpected EOF while looking for matching `''
2015-09-10T13:01:45.565096+00:00 app[scheduler.7629]: app/console: line 28: syntax error: unexpected end of file
I'm a bit confused: the file app/console seems not to exist, but then the script encounters an unexpected EOF (even though the file supposedly doesn't exist), and then an unexpected end of file (isn't that the same thing as the message immediately before?).
What am I doing wrong?

Use php instead of bash to launch the console:
php app/console myapp:send:confirmations --verbose
I have the same behaviour (crash) on Ubuntu 15.04:
$ bash app/console
app/console: line 2: ?php: No such file or directory
app/console: line 18: unexpected EOF while looking for matching `''
app/console: line 23: syntax error: unexpected end of file
$ php app/console -v
Symfony version 2.7.4 - app/prod
...
It seems that the shebang at the start of app/console is ignored and the PHP interpreter is not called:
#!/usr/bin/env php
<?php
....
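When you pass the file to bash as an argument, the shebang is never consulted: bash treats it as an ordinary comment and parses the PHP source as shell code, so <?php is read as an input redirection from a nonexistent file named ?php, which is exactly the first error above. A minimal sketch that reproduces this (the file name demo.php is made up):
```
$ printf '#!/usr/bin/env php\n<?php echo "hi";\n' > demo.php
$ bash demo.php    # shebang is just a comment to bash; <?php parses as a redirection
demo.php: line 2: ?php: No such file or directory
$ php demo.php     # the PHP interpreter runs the script as intended
hi
```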
As Aaron Copley explains, the kernel only honors the shebang when the file is executable and you invoke it directly by an absolute or relative path.
So if you mark the file as executable and launch the script with a relative path, the PHP interpreter will be called:
$ chmod +x app/console
$ ./app/console -v
Symfony version 2.7.4 - app/prod

Related

Getting error while running Pig script on Google Dataproc cluster, all the parameters are defined correctly

This is my Pig script:
fs -cp -f gs://$codepath/db_password.sh file://$dataprochome/db_password.sh;
fs -cp -f gs://$codepath/jdbc_daily_load_tables.py file://$dataprochome/jdbc_daily_load_tables.py;
sh chmod +x $dataprochome/db_password.sh;
sh chmod +x $dataprochome/jdbc_daily_load_tables.py;
sh $dataprochome/db_password.sh $dataprochome $stg_gcs_bucket $se_stg_gcs_bucket $target_schema $target_table_stg_gm_add_attributes_orbit $target_table_orbit_delivery_partner_icc $kvenv;
All the input variables are defined properly, but I am still getting the error below:
2023-02-09 20:05:36,221 [main] ERROR org.apache.pig.Main - ERROR 2997: Encountered IOException. org.apache.pig.tools.parameters.ParseException: Encountered "<EOF>" at line 1, column 6.
Was expecting one of:
IDENTIFIER
OTHER
LITERAL
SHELLCMD
Details at logfile: /tmp/17114c5e-af3d-4a09-89c4-324250436a76/pig_1675973135671.log
2023-02-09 20:05:36,240 [main] INFO org.apache.pig.Main - Pig script completed in 712 milliseconds (712 ms)
As per the comments on the answer provided by @OneCricketeer, the error was resolved by removing the spaces between the arguments passed to the .sh file in the Pig script.
That's a bash script, not Pig... If you're running that through Pig, that perfectly explains why it's failing to parse the file.
You need to use sh rather than pig on shell scripts.
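As a purely hypothetical illustration of that fix (the real invocation isn't shown in the post, and the script name here is made up): Pig's parameter preprocessor rejects whitespace inside a -param definition, so spaces around the '=' make it hit end-of-input right after the parameter name:
```
# Fails with ParseException: the space splits the name from its value
pig -f daily_load.pig -param kvenv = prod
# Parses correctly: name and value joined by '='
pig -f daily_load.pig -param kvenv=prod
```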

Hadoop 2.6 start-dfs.sh errors on CentOS 6.7

I used this tutorial to install Hadoop 2.6 on CentOS 6.7 with Java 1.8.0_72, and everything went well until the execution of start-dfs.sh from Hadoop-home/sbin/start-dfs.sh. Below is the output:
[hadoop@10 sbin]$ start-dfs.sh
16/02/26 21:47:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /etc/bashrc: line 65: id: command not found
localhost: /etc/bashrc: line 65: id: command not found
localhost: /usr/bin/env: bash: No such file or directory
localhost: /etc/bashrc: line 65: id: command not found
localhost: /etc/bashrc: line 65: id: command not found
localhost: /usr/bin/env: bash: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: /etc/bashrc: line 65: id: command not found
0.0.0.0: /etc/bashrc: line 65: id: command not found
0.0.0.0: /usr/bin/env: bash: No such file or directory
16/02/26 21:47:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
It seems there is something wrong with /etc/bashrc at line 65, but I checked and there is nothing I modified.
I run the CentOS 6.7 final release using the Parallels VM manager on my Mac, which is a 64-bit machine.
Thanks in advance.
Edit your core-site.xml and add this part:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
Then create the folder accordingly, example command:
mkdir -p /app/hadoop/tmp
chown yourHadoopUsername:yourHadoopGroupName /app/hadoop/tmp
chmod 777 /app/hadoop/tmp
Format your namenode:
hdfs namenode -format
Start your hadoop:
start-dfs.sh
start-yarn.sh
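To confirm the daemons actually came up, a quick check with jps (the output below is illustrative; your process IDs will differ):
```
$ jps
12345 NameNode
12399 DataNode
12587 SecondaryNameNode
13021 ResourceManager
13113 NodeManager
```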

openstack - stack.sh fails on syntax errors

I am trying to install DevStack as a non-root user, but I am getting errors.
The log directory contains only the broken symbolic links stack.sh.log and stack.sh.log.summary (pointing to nonexistent files).
I've used the sample local.conf; the only change is that I defined $DEST.
OS: RHEL 6.6
STDOUT/ERR:
/home/john/scripts/openstack/devstack/functions-common: line 68: conditional binary operator expected
/home/john/scripts/openstack/devstack/functions-common: line 68: syntax error near `"$1"'
/home/john/scripts/openstack/devstack/functions-common: line 68: ` [[ -v "$1" ]]'
./stack.sh: line 119: GetDistro: command not found
/home/john/scripts/openstack/devstack/functions-common: line 68: conditional binary operator expected
/home/john/scripts/openstack/devstack/functions-common: line 68: syntax error near `"$1"'
/home/john/scripts/openstack/devstack/functions-common: line 68: ` [[ -v "$1" ]]'
/home/john/scripts/openstack/devstack/stackrc: line 48: isset: command not found
/home/john/scripts/openstack/devstack/.localrc.auto: line 84: enable_service: command not found
/home/john/scripts/openstack/devstack/stackrc: line 498: is_package_installed: command not found
/home/john/scripts/openstack/devstack/stackrc: line 666: get_default_host_ip: command not found
/home/john/scripts/openstack/devstack/stackrc: line 668: die: command not found
WARNING: this script has not been tested on
./stack.sh: line 179: die: command not found
./stack.sh: line 197: export_proxy_variables: command not found
./stack.sh: line 202: disable_negated_services: command not found
./stack.sh: line 209: is_package_installed: command not found
./stack.sh: line 209: install_package: command not found
[sudo] password for john:
./stack.sh: line 231: is_ubuntu: command not found
./stack.sh: line 238: is_fedora: command not found
./stack.sh: line 301: safe_chown: command not found
./stack.sh: line 302: safe_chmod: command not found
./stack.sh: line 310: safe_chown: command not found
Traceback (most recent call last):
File "/home/john/scripts/openstack/devstack/tools/outfilter.py", line 24, in <module>
import argparse
ImportError: No module named argparse
First, fix the missing module by using yum:
yum install python-argparse.noarch
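Note that the [[ -v "$1" ]] syntax errors at the top of the log are a separate symptom: the -v test operator was only added in Bash 4.2, and RHEL 6 ships an older Bash, so the shell version is worth checking (a quick sanity test, not part of the original answer):
```
bash --version   # DevStack's functions-common uses [[ -v ... ]], which needs Bash >= 4.2
```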
Also you will need to run ./unstack.sh to clear the logs.
I still faced this issue, and further debugging led me to a conflict when both python-zaqarclient and python-openstackclient were installed. As a quick workaround I removed python-zaqarclient:
sudo pip uninstall python-zaqarclient
Then
- apt-get upgrade
- apt-get dist-upgrade
- ./stack.sh
Hope this helps!

Memory issue with meteor up (mup) on Digital Ocean

I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create and enable the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
"of=/swapfile" designates the file's name; in this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This file will last on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 makes the swap act as an emergency buffer, helping prevent out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
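To confirm the swap is active and sized as expected (a quick check, not part of the original steps):
```
free -m       # the Swap: row should show the new capacity
swapon -s     # lists /swapfile with its type, size, and priority
```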
This only worked for me after increasing the swap space to 1 GB:
Turn all swap off:
sudo swapoff -a
Resize the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Make the swap file usable:
sudo mkswap /swapfile
Turn swap on again:
sudo swapon /swapfile

Drush rsync code 23 error

I have a path issue. I can't seem to figure out why I am getting this code 23 error; I am guessing that rsync can't write to my local /private/tmp directory.
Here is the complete error message:
```
Do you really want to continue? (y/n): y
rsync: link_stat "/tmp/SGDU55.sql" failed: No such file or directory (2)
rsync error: some files could not be transferred (code 23) at /SourceCache/rsync/rsync-42/rsync/main.c(1400) [receiver=2.6.9]
Could not rsync from xxx@staging-5244.prod.xxx.com:/tmp/SGDU55.sql to [error]
/private/tmp/-to-drupal_db.sql.p0YIBu
```
Here is the abbreviated output of the drush simulate command:
```
$ drush sql-sync @aq6 @aqsolo --simulate
.....
Calling system(rsync -e 'ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key' -akz --exclude=".git" --exclude=".gitignore" --exclude=".hg" --exclude=".hgignore" --exclude=".hgtags" --exclude=".bzr" --exclude=".bzrignore" --exclude=".bzrtags" --exclude=".svn" /private/tmp/-to-drupal_db.sql.iXOzSo vagrant@12.12.12.12:tmp/drupal_db.sql);
Calling system(ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key vagrant@12.12.12.12 'mysql --database=drupal_db --host=localhost --user=root --password=password --silent < tmp/drupal_db.sql 2>&1');
$
```
Is there a way to change the /private/tmp path to something else?
I have added chmod 1777 to /private and /private/tmp.
Since I was using Acquia, the problem was solved as soon as I changed to the correct %dump-dir path.
So now I have:
'%dump-dir' => '/mnt/tmp/',
If your alias root begins with 'root' => '/mnt/gfs....., then it should be the same.
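For context, %dump-dir lives under path-aliases in a Drush site alias definition. A minimal sketch (the alias name, file name, and root below are made up; the real root was elided above):
```
// e.g. ~/.drush/mysite.aliases.drushrc.php
$aliases['aqsolo'] = array(
  'root' => '/mnt/gfs/example/docroot',   // hypothetical; use your real docroot
  'uri' => 'default',
  'path-aliases' => array(
    '%dump-dir' => '/mnt/tmp/',           // where drush sql-sync writes its temporary SQL dumps
  ),
);
```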
