I followed the steps in the blog post below to get WordPress going:
https://blog.pivotal.io/pivotal-cloud-foundry/products/getting-started-with-wordpress-on-cloud-foundry
When I do a cf push, the app keeps crashing, with the following lines in the error output:
2016-05-14T15:41:44.22-0700 [App/0] OUT total size is 2,574,495 speedup is 0.99
2016-05-14T15:41:44.24-0700 [App/0] ERR fusermount: entry for /home/vcap/app/htdocs/wp-content not found in /etc/mtab
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: mountpoint is not empty
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: if you are sure this is safe, use the 'nonempty' mount option
2016-05-14T15:41:44.64-0700 [DEA/86] ERR Instance (index 0) failed to start accepting connections
2016-05-14T15:41:44.68-0700 [API/1] OUT App instance exited with guid cf2ea899-3599-429d-a39d-97d0e99280e4 payload: {"cc_partition"=>"default", "droplet"=>"cf2ea899-3599-429d-a39d-97d0e99280e4", "version"=>"c94b7baf-4da4-44b5-9565-dc6945d4b3ce", "instance"=>"c4f512149613477baeb2988b50f472f2", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1463265704}
My manifest file:

cf-ex-wordpress$ cat manifest.yml
---
applications:
- name: myapp
  memory: 128M
  path: .
  buildpack: https://github.com/cloudfoundry/php-buildpack
  host: near
  services:
  - mysql-db
  env:
    SSH_HOST: user@abc.com
    SSH_PATH: /home/user
    SSH_KEY_NAME: sshfs_rsa
    SSH_OPTS: '["cache=yes", "kernel_cache", "compression=no", "large_read"]'
Please check your SSH mount; there are more details at https://github.com/dmikusa-pivotal/cf-ex-wordpress/issues
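Given the "fuse: mountpoint is not empty" line in the log, one thing worth trying is the nonempty option that sshfs itself suggests. This is only a sketch, assuming the buildpack passes SSH_OPTS straight through to sshfs (as the cf-ex-wordpress example does):

env:
  SSH_HOST: user@abc.com
  SSH_PATH: /home/user
  SSH_KEY_NAME: sshfs_rsa
  # 'nonempty' lets sshfs mount over the already-populated wp-content directory
  SSH_OPTS: '["cache=yes", "kernel_cache", "compression=no", "large_read", "nonempty"]'

If wp-content is expected to be empty at mount time, it is also worth checking whether the pushed app bundle itself ships files in htdocs/wp-content, since that directory is the local mount point.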
I have application details in per-application, per-environment vars files like the one below. For example, myapp1 in the "QA" environment looks like this:
cat myapp1_QA.yml
---
APP_HOSTS:
  - myapphost7:
      - logs:
          - /tmp/web/apphost7_access
          - /tmp/web/apphost7_error
  - myapphost9:
      - logs:
          - /tmp/web/apphost9_access
          - /tmp/web/apphost9_error
          - /tmp/web/apphost9_logs
WEB_HOSTS:
  - mywebhost7:
      - logs:
          - /tmp/webserver/webhost7.pid
In this example I wish to create a dynamic group containing the three hosts
myapphost7
myapphost9
mywebhost7
and each host should carry a variable logs that can be looped over to get the file paths.
Below is my ansible play:
---
- hosts: localhost
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"
    - name: Dsiplay dictionary data
      debug:
        msg: "{{ item[logs] }}"
      loop: "{{ APP_HOSTS }}"
When I run the playbook with
ansible-playbook read.yml -e appname=myapp1 -e myenv=QA
I get the below error:
TASK [Dsiplay dictionary data] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'logs' is undefined\n\nThe error appears to be in '/root/read.yml': line 8, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Dsiplay dictionary data\n ^ here\n"}
My requirement is to store "myapphost7", "myapphost9" and "mywebhost7" in a group using add_host, with a variable logs holding the list of log files for each host.
Note: if no host (for example mywebhost7) is defined under WEB_HOSTS: or APP_HOSTS:, then nothing should be added to the dynamic group for that section.
Can you please suggest?
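Here is one way to do it (a sketch, not a definitive answer, assuming the vars file keeps exactly the structure shown above, i.e. each list entry is a one-key dict whose value is a list containing a logs dict):

---
- hosts: localhost
  gather_facts: false
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"

    # Each entry looks like { myapphost7: [ { logs: [...] } ] }: the hostname
    # is the dict's only key and the log list sits one level further down.
    - name: Build a dynamic group from APP_HOSTS and WEB_HOSTS
      add_host:
        name: "{{ item.keys() | list | first }}"
        groups: dynamic_hosts
        logs: "{{ (item.values() | list | first)[0]['logs'] }}"
      loop: "{{ (APP_HOSTS | default([])) + (WEB_HOSTS | default([])) }}"

- hosts: dynamic_hosts
  gather_facts: false
  tasks:
    # debug runs on the controller, so this works even if the dynamic
    # hostnames are not resolvable from where the playbook runs
    - name: Loop over this host's log paths
      debug:
        msg: "{{ item }}"
      loop: "{{ logs }}"

The default([]) filters cover the note above: a section that defines no hosts contributes nothing to the group. Incidentally, the original error comes from msg: "{{ item[logs] }}" - without quotes, logs is treated as an (undefined) variable name rather than the string key 'logs'.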
I'm trying to use the Euca 5 ansible installer to install a single server hosting all services ("exp-euca.lan.com") with two node controllers ("exp-enc-[01:02].lan.com") running VPCMIDO. The install goes okay and I end up with a single server running all Euca services, including being able to run instances, but the ansible scripts never take action to install and configure my node servers. I think I'm misunderstanding the inventory format. What could be wrong with the following? I don't want my main euca server to run instances, and I do want the two node controllers installed and running instances.
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    zone:
      hosts:
        exp-euca.lan.com:
    nodes:
      hosts:
        exp-enc-[01:02].lan.com:
All of the plays related to nodes follow a pattern similar to this: they succeed and acknowledge the main server exp-euca, but then skip the nodes.
2021-01-14 08:15:23,572 p=57513 u=root n=ansible | TASK [zone assignments default] ***********************************************************************************************************************
2021-01-14 08:15:23,596 p=57513 u=root n=ansible | ok: [exp-euca.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_facts": {"host_zone_key": "1"}, "ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"]}
2021-01-14 08:15:23,604 p=57513 u=root n=ansible | skipping: [exp-enc-01.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"], "skip_reason": "Conditional result was False"}
It should be node, not nodes, i.e.:
node:
  hosts:
    exp-enc-[01:02].lan.com:
The documentation for this is currently incorrect.
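A quick way to confirm that a group name matches what the playbooks target is to render the inventory before running the install, for example (assuming the inventory file is named inventory.yml):

ansible-inventory -i inventory.yml --graph

With the group renamed from nodes to node, the exp-enc hosts should show up under @node in the graph output.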
Here is the document I refer to.

1. sample_server.yaml:

type: os.nova.server
version: 1.0
properties:
  name: cirros_server
  flavor: m1.small
  image: b86fb462-c5c2-4a08-9fe4-c9f86d05763d
  networks:
    - network: external-net
2. Execute the following commands:
# openstack cluster create --profile pserver --desired-capacity 2 mycluster
# openstack cluster receiver create --type webhook --cluster mycluster --action CLUSTER_SCALE_OUT --params count=2 r_01
# export ALRM_URL01='http://vip:8777/v1/webhooks/aac3433a-40de-4d7d-830c-e0035f2a4d13/trigger?V=1&count=2'
# aodh alarm create --type gnocchi_resources_threshold --aggregation-method mean --name cpu-high --metric cpu_util --threshold 70 --comparison-operator gt --granularity 300 --evaluation-periods 1 --alarm-action $ALRM_URL01 --repeat-actions False --query metadata.user_metadata.cluster_id=$MYCLUSTER_ID --resource-type instance --resource-id f7e0e8a6-51a3-422d-b631-7ddaf65b3dfb
3. Log into each cluster node and run some CPU-burning workload there to drive CPU utilization high.
I added log output to /usr/lib/python2.7/site-packages/aodh/notifier/rest.py to inspect the request when the alarm triggers:
class RestAlarmNotifier(notifier.AlarmNotifier):
    def notify(self, action, alarm_id, alarm_name, severity, previous,
               current, reason, reason_data, headers=None):
        body = {'alarm_name': alarm_name, 'alarm_id': alarm_id,
                'severity': severity, 'previous': previous,
                'current': current, 'reason': reason,
                'reason_data': reason_data}
        headers['content-type'] = 'application/json'
        kwargs = {'data': json.dumps(body),
                  'headers': headers}
        max_retries = self.conf.rest_notifier_max_retries
        session = requests.Session()
        LOG.info('#########################')
        LOG.info(session)
        LOG.info(kwargs)
        LOG.info(action.geturl())
        LOG.info('#########################')
        session.mount(action.geturl(),
                      requests.adapters.HTTPAdapter(max_retries=max_retries))
        resp = session.post(action.geturl(), **kwargs)
        LOG.info('$$$$$$$$$$$$$$$$$$$$$$$')
        LOG.info(resp.content)
        LOG.info('$$$$$$$$$$$$$$$$$$$$$$$')
Some error messages are output in /var/log/aodh/notifier.log.
[screenshot of the notifier.log error omitted]
The error is caused by adding the body parameter to the request; a direct POST without the body succeeds, for example this curl request with no body:
curl -g -i -X POST 'http://vip:8777/v1/webhooks/34e91386-7176-4b30-bc17-5c3503712696/trigger?V=1'
The installed Aodh-related packages and versions are as follows:
python2-aodhclient-1.1.1-1.el7.noarch
openstack-aodh-api-7.0.0-1.el7.noarch
openstack-aodh-common-7.0.0-1.el7.noarch
openstack-aodh-listener-7.0.0-1.el7.noarch
python-aodh-7.0.0-1.el7.noarch
openstack-aodh-notifier-7.0.0-1.el7.noarch
openstack-aodh-evaluator-7.0.0-1.el7.noarch
openstack-aodh-expirer-7.0.0-1.el7.noarch
Can anyone point me in the right direction? Thanks.
The problem has been solved. Here is the relevant file:
https://github.com/openstack/aodh/blob/master/aodh/notifier/rest.py#L79

Modify aodh's rest.py (aodh/notifier/rest.py): under the headers['content-type'] line, add this line:

headers['openstack-api-version'] = 'clustering 1.10'

Then restart the aodh services.
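For context, the patched stretch of notify() would look roughly like this (a sketch based on the snippet above; only the openstack-api-version line is new):

        headers['content-type'] = 'application/json'
        # Added line: declare the Senlin (clustering) API microversion so the
        # webhook endpoint accepts a POST that carries a JSON body.
        headers['openstack-api-version'] = 'clustering 1.10'
        kwargs = {'data': json.dumps(body),
                  'headers': headers}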
Everything worked well until we wanted to set NB_USER to the logged-in user. After changing the config to run as root with start.sh as the default cmd, the container fails to start, with the error below in the log. Any help is highly appreciated.
Here is the log after running the container as root:
Set username to: user1
Relocating home dir to /home/user1
mv: cannot move '/home/jovyan' to '/home/user1': Device or resource busy
Here is the config.yaml:

singleuser:
  defaultUrl: "/lab"
  uid: 0
  fsGid: 0
hub:
  extraConfig: |
    c.KubeSpawner.args = ['--allow-root']
    c.Spawner.cmd = ['start.sh', 'jupyterhub-singleuser']

    def notebook_dir_hook(spawner):
        spawner.environment = {'NB_USER': spawner.user.name, 'NB_UID': '1500'}

    c.Spawner.pre_spawn_hook = notebook_dir_hook

    from kubernetes import client

    def modify_pod_hook(spawner, pod):
        pod.spec.containers[0].security_context = client.V1SecurityContext(
            privileged=True,
            capabilities=client.V1Capabilities(
                add=['SYS_ADMIN']
            )
        )
        return pod

    c.KubeSpawner.modify_pod_hook = modify_pod_hook
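One observation that may help: "Device or resource busy" on the mv of /home/jovyan usually means /home/jovyan is itself a mount point (the user's volume), and a mount point cannot be renamed from inside the container. If this is a Zero to JupyterHub deployment, an option worth trying is mounting the volume at the per-user home path in the first place, so start.sh has nothing to relocate. A sketch, assuming a reasonably recent z2jh chart where singleuser.storage.homeMountPath is available:

singleuser:
  storage:
    # Mount the user volume directly at the templated home path so that
    # start.sh does not have to mv the mount point itself.
    homeMountPath: /home/{username}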
When I try to run a task asynchronously as another user using become in an ansible playbook, I get a "job not found" error. Can someone suggest how I can successfully check the async job status?
I am using ansible version 2.7.
I read in some articles that one should use the async_status task with the same become user as the async task to read the job status. I tried that, but I still get the same "job not found" error.
- hosts: localhost
  tasks:
    - shell: startInstance.sh
      register: start_task
      async: 180
      poll: 0
      become: yes
      become_user: venu

    - async_status:
        jid: "{{ start_task.ansible_job_id }}"
      register: start_status
      until: start_status.finished
      retries: 30
      become: yes
      become_user: venu
Expected result:
I should be able to fire and forget the job.
Actual result:
{"ansible_job_id": "386361757265.15925428", "changed": false, "finished": 1, "msg": "could not find job", "started": 1}