Ceilometer Alarm Error - With Heat (OpenStack) - openstack

I have a problem and I can't imagine why.
I have downloaded an autoscaling Heat template for OpenStack. The template file contains the following resources:
cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-up if the average CPU > 50% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 50
    alarm_actions:
      - {get_attr: [server_scaleup_policy, alarm_url]}
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: gt
cpu_alarm_low:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-down if the average CPU < 15% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 15
    alarm_actions:
      - {get_attr: [server_scaledown_policy, alarm_url]}
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: lt
When I launch this stack, OpenStack reports the following error:
NotFound: resources.cpu_alarm_low: Not Found (HTTP 404) (Request-ID: req-37d6c753-40db-4596-86a4-e1d10f0c531a)
Under Resource Types, OS::Ceilometer::Alarm is available.
Have I forgotten to load something in my localrc?
Here is part of my localrc:
# Enable the Ceilometer devstack plugin
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
#Ceilometer-services
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api
#Ceilometer-alarm
enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
#Ceilometer-ipmi (Use only if required)
enable_service ceilometer-aipmi
I hope you can help me.
Best Regards,
Chris

Recheck the template, or refer to http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat and recheck the syntax. I think you are missing the resource reference that tells the alarm which resource you want to monitor.
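For reference, the alarm's get_attr call only resolves if a scaling policy resource with exactly that name exists in the same template. A minimal sketch of what the answer means, with names assumed from the standard autoscaling example (server_group would be an OS::Heat::AutoScalingGroup defined elsewhere in the template):

# Hypothetical sketch: the alarm's alarm_actions entry
# {get_attr: [server_scaleup_policy, alarm_url]} only resolves if this
# resource exists under exactly this name.
server_scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: server_group}
    cooldown: 60
    scaling_adjustment: 1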

Related

unauthorized: authentication required - trying to pull or run a public image

I'm trying to follow along this blog about using Docker with R.
I followed basic Docker set up steps and am able to run the hello world image.
I'm on an old 2009 Mac and had to use Docker Toolbox.
I'm in a place with weak internet connection and am using a personal hotspot.
Each time I try to run docker run --rm -p 8787:8787 rocker/verse I wait for a few minutes and see a downloading message, then I get a message "docker: unauthorized: authentication required."
I found this separate documentation which advised me to add a password:
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/rstudio
But I got the same result "docker: unauthorized: authentication required."
I did some Google searching and found some posts both here on SO and on Github but was unable to identify what is causing this error in my specific case.
I suspect my weak internet connection might have something to do with it since I seem to be able to download for about 10 or 15 minutes before seeing this message.
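One way to decouple the flaky download from the run step is to pull the image on its own first; re-running docker pull retries layers that failed, and docker run then starts from the local copy (these are standard Docker CLI commands, shown here as a sketch):

# Pull separately; re-run this command if the connection drops mid-download.
docker pull rocker/verse
# Once the pull completes, run from the local image.
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/verse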
Here is Docker info:
Macs-MacBook:~ macuser$ docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.951GiB
Name: default
ID: XMCE:OBLV:CKEX:EGIB:PHQ7:MLHF:ZJSA:PGYN:OIMM:JI67:ETCI:JKBH
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Does anyone know where I can look next in order to be able to pull and/or run the rocker image?

How to set Monolog log level per channel in Symfony

I'm using Symfony with Monolog to log data to loggly.com. My Symfony app uses the following configuration:
loggly:
  type: loggly
  token: ...
  level: INFO
  bubble: true
  channels: ["app", "request"]
As you can see, I'm logging the channels app and request. The minimum log level is INFO for both channels.
Now I would like to distinguish the log level per channel, like this:
Channel "app": INFO (and above)
Channel "request": ERROR (and above)
Is there a way to adjust my configuration, or do I have to solve this programmatically?
Thanks in advance
ninsky
You can split the handler into two, one per channel:
loggly_app:
  type: loggly
  token: ...
  level: INFO
  bubble: true
  channels: ["app"]
loggly_request:
  type: loggly
  token: ...
  level: ERROR
  bubble: true
  channels: ["request"]
An additional option would be environment-specific configs (e.g. when app-info logging isn't required in production).
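A sketch of that environment-specific option, assuming the classic per-environment config layout (file name and handler name are assumptions): the production config overrides just the level of one handler.

# config_prod.yml (assumed layout): quieter app channel in production only
monolog:
  handlers:
    loggly_app:
      type: loggly
      token: ...
      level: warning
      bubble: true
      channels: ["app"]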

Concourse: Upload to artifactory fails with curl error (outstanding read data remaining)

I want to automatically discover new stemcell versions on pivnet, download them from Pivotal Network and upload them to a local Artifactory.
However, the upload-to-Artifactory task fails with the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 410M 0 0 100 410M 0 439M --:--:-- --:--:-- --:--:-- 440M
curl: (18) transfer closed with outstanding read data remaining
I get this error on uploading any kind of release and stemcell.
My pipeline configuration looks like this:
---
resource_types:
- name: artifactory
  type: docker-image
  source:
    repository: pivotalservices/artifactory-resource
- name: pivnet
  type: docker-image
  source:
    repository: pivotalcf/pivnet-resource
    tag: latest-final
resources:
- name: git-repository
  type: git
  source:
    uri: ssh://<git-repository>
    private_key: ((ssh_key))
- name: stemcell
  type: pivnet
  check_every: 1m
  source:
    api_token: ((pivnet-api-token))
    product_slug: stemcells
- name: artifactory
  type: artifactory
  source:
    endpoint: https://((artifactory_domain)):443/artifactory
    repository: "/<path>/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent"
    regex: "bosh-vsphere-esxi-ubuntu-trusty-go_agent-(?<version>.*).tgz"
    username: ((artifactory_username))
    password: ((artifactory_password))
jobs:
- name: download-and-upload
  plan:
  - get: <git-repository>
  - get: stemcell
    trigger: true
    version: every
  - task: rename-files
    file: <git-repository>/tasks/rename-stemcell/task.yml
  - put: artifactory
    params: { file: renamed-stemcell/stemcell/bosh-vsphere-esxi-ubuntu-trusty-go_agent*.tgz }
...
I use Concourse v3.9.1 and the stemcell bosh-vsphere-esxi-ubuntu-trusty-go_agent/3468.21. Concourse is deployed as a BOSH release.
Any hints what could be the root cause of this error?
I tried to manually issue the curl command, which resulted in the same error.
Then I tried to upload the stemcell manually.
That revealed the root cause: deploy permissions were missing on Artifactory.
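One way to check this independently of Concourse is a manual upload with the same credentials; a 403 response points at missing deploy permission on the target repository. A sketch (hostname, path and file name below are placeholders):

# Placeholders throughout; -T uploads the given file via PUT.
curl -u "$ARTIFACTORY_USERNAME:$ARTIFACTORY_PASSWORD" \
     -T bosh-vsphere-esxi-ubuntu-trusty-go_agent-3468.21.tgz \
     "https://artifactory.example.com/artifactory/<path>/stemcells/test-upload.tgz"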

Questions about Openstack Ceilometer meter.yaml and event_definitions.yaml

I am using the Ceilometer Newton version. I do not want to collect any metering samples (I can just turn off the compute polling agent instead) and only want to collect some event samples.
I configure the pipeline.yaml like below:
---
sources:
  - name: meter_source
    interval: 36000
    meters: "!*"
    sinks:
      - meter_sink
sinks:
  - name: meter_sink
    transformers:
    publishers:
      - notifier://
I configure the event.yaml like below:
---
sources:
  - name: event_source
    events:
      - "compute.instance.create.end"
      - "compute.instance.delete.end"
      - "compute.instance.resize.confirm.end"
    sinks:
      - event_sink
sinks:
  - name: event_sink
    transformers:
    publishers:
      - notifier://
I thought that with this configuration Ceilometer would collect only the events defined in event.yaml, but that is not what happens: in fact Ceilometer collects more events than I defined there.
Later I realized that pipeline.yaml also has to be configured on the compute nodes, so I just turned off the Ceilometer agent on the compute nodes to avoid collecting metering samples.
However, Ceilometer still collected more events than I defined in event_pipeline.yaml. I then found that meters.yaml also contains event definitions. After I deleted them all, Ceilometer only collects the events defined in event_pipeline.yaml.
This leads me to two questions:
Why do we need so many files (event_pipeline.yaml, meters.yaml and event_definitions.yaml) to determine one thing (which events to collect)?
Why do both meters.yaml and event_definitions.yaml contain event definitions?
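For context, the two files serve different purposes: event_definitions.yaml describes how a notification is turned into an event (which payload fields become traits), while meters.yaml can additionally derive metering samples from notifications. A sketch of an event_definitions.yaml entry (the trait names and field paths here are illustrative):

# Illustrative event_definitions.yaml entry: maps notification payload
# fields to event traits for compute.instance.create.end.
- event_type: compute.instance.create.end
  traits:
    instance_id:
      fields: payload.instance_id
    state:
      fields: payload.state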

OS::Heat::SoftwareDeployment is staying stuck in CREATE_IN_PROGRESS status

I am trying to customise new instances created within OpenStack Mitaka, using Heat templates. Using OS::Nova::Server with a script in user_data works fine.
The next idea is to do additional steps via OS::Heat::SoftwareConfig.
The config is:
  type: OS::Nova::Server
  ....
  user_data_format: SOFTWARE_CONFIG
  user_data:
    str_replace:
      template:
        get_file: vm_init1.sh
config1:
  type: OS::Heat::SoftwareConfig
  depends_on: vm
  properties:
    group: script
    config: |
      #!/bin/bash
      echo "Running $0 OS::Heat::SoftwareConfig look in /var/tmp/test_script.log" | tee /var/tmp/test_script.log
deploy:
  type: OS::Heat::SoftwareDeployment
  properties:
    config:
      get_resource: config1
    server:
      get_resource: vm
The instance is set up nicely (the script vm_init1.sh above runs fine) and one can log in, but the "config1" example above is never executed.
Analysis
- The base image is Ubuntu 16.04, created with disk-image-create and including "vm ubuntu os-collect-config os-refresh-config os-apply-config heat-config heat-config-script"
- From "openstack stack resource list $vm" one sees that the deployment never finishes, with OS::Heat::SoftwareDeployment status=CREATE_IN_PROGRESS
- "openstack stack resource show $vm config1" shows resource_status=CREATE_COMPLETE
- Within the VM, /var/log/cloud-init-output.log shows the output of the script vm_init1.sh, but no trace of the 'config1' script. The log os-apply-config.log is empty; is that normal?
How does one troubleshoot OS::Heat::SoftwareDeployment configs?
(I have read https://docs.openstack.org/developer/heat/template_guide/software_deployment.html#software-deployment-resources)
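A common troubleshooting sequence for stuck SoftwareDeployments (a sketch; verify the flags against your image and client version) is to force a debug collection cycle on the instance and inspect the deployment from the Heat side:

# On the instance: run one collection cycle with debug output so
# agent-side errors surface on the console.
sudo os-collect-config --force --one-time --debug

# On the controller: list deployments and show the one for this server,
# including its status and any captured output.
heat deployment-list
heat deployment-show <deployment-id>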
