Questions about OpenStack Ceilometer meters.yaml and event_definitions.yaml

I am using the Newton version of Ceilometer. I do not want to collect any metering samples (I can just turn off the compute polling agent instead) and only want to collect some event samples.
I configured pipeline.yaml like below:
---
sources:
    - name: meter_source
      interval: 36000
      meters: "!*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
I configured event_pipeline.yaml like below:
---
sources:
    - name: event_source
      events:
          - "compute.instance.create.end"
          - "compute.instance.delete.end"
          - "compute.instance.resize.confirm.end"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - notifier://
I thought that with this configuration Ceilometer would collect only the events defined in event_pipeline.yaml, but it did not work as I expected: in fact Ceilometer collected more events than I had defined there.
I later realized that pipeline.yaml should also be configured on the compute nodes, so I just turned off the Ceilometer agent on the compute nodes to avoid collecting metering samples.
However, Ceilometer still collected more events than I defined in event_pipeline.yaml. Afterwards I found that meters.yaml contains event definitions. I deleted them all, and then Ceilometer only collected the events defined in event_pipeline.yaml.
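For reference, the event-to-sample definitions I removed from meters.yaml looked roughly like this (reproduced from memory, so treat the exact fields as illustrative rather than a verbatim copy of the shipped file):
metric:
    - name: 'memory'
      event_type: 'compute.instance.*'
      type: 'gauge'
      unit: 'MB'
      volume: $.payload.memory_mb
      resource_id: $.payload.instance_id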
That leaves me with two questions:
Why do we need so many files (event_pipeline.yaml, meters.yaml and event_definitions.yaml) to determine one thing (such as which events to collect)?
Why do both meters.yaml and event_definitions.yaml contain event definitions?

Related

SaltStack - mine.get is able to grab mine_function data from master, but not in .sls or jinja variable

I hope you can help me with a rather frustrating issue I have been having. I have been trying to remove static config from some config files and move this to Pillar/Mine data using Salt-Stack.
Everything is going well, with the exception of 1 specific task.
This is grabbing data (custom grain) from 3 specific minions to make 3 different variables in an .sls (context) or a jinja file (direct variable) on other minions, but I cannot seem to get it to work.
(My scenario is flexible as I can call this in either a state file or jinja variable in a config file.)
This is on AWS EC2 instances, but can be replicated away from AWS in my lab. The grain I need is "public_ipv4", and the reason I cannot use network.util in the salt runner is because this is NAT'd and the box doesn't have a second interface with the public IP assigned to it. (This cannot be changed.)
Pillar data works and I have an init.sls for the mine function:
mine_functions:
  grains.item:
    - location
    - environment
    - roles
    - srvtype
    - instance
    - az
    - public_ipv4
    - fqdn
    - ipv4
    - ipv6
(Also, the custom grain "public_ipv4" works when called by the minion, so I know it is not the grains themselves that are incorrect.)
When targeting via the master using the below it brings back the requested information:
my-minion:
    ----------
    minion-with-data-i-want-1:
        ----------
        az:
            c
        environment:
            dev
        fqdn:
            correct_fqdn
        instance:
            3
        ipv4:
            - Correct_local_ip
            - 127.0.0.1
        ipv6:
            - ::1
            - Correct_ip
        location:
            correct_location
        public_ipv4:
            Correct_public_ip
        roles:
            Correct_role
        srvtype:
            None
It is key to note here that the above comes from:
salt '*globbed_target*' mine.get '*minions-with-data-i-need-glob*' grains.item
This is from the master, but I cannot single out a specific grain by using indexing or any args/kwargs etc.
So I put some syntax into a state file and some jinja templates and I cannot get it to work. Here are a few I have tried so far:
Jinja:
{% set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] %}
Above returns nothing.
State file:
- context:
  - ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') }}
The above returns a dict error:
Context must be formed as a dict
Running latest salt-minion/master from apt.
Steps I have taken:
Running salt '*' mine.update after every change, and checking with salt '*' mine.valid after every change; they show up.
Any help is appreciated.
This looks like you are running into a classic problem: not knowing what you are getting as the return value.
First, your {# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #} returns nothing because it is a Jinja comment. Use {% set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item') %} instead.
The next problem is that you are passing a list to context when it is supposed to take a dict, so the error isn't even related to the mine.
try this instead
- context:
    ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') | json }}
Next, learn to use slsutil.renderer to look at how things are rendered, e.g. salt 'my-minion' slsutil.renderer salt://thing/init.sls default_renderer=jinja.
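To pull out just the one grain, you can index into the dict that mine.get returns (it is keyed by the responding minion's ID). A rough sketch, assuming the minion ID from the example output above and the default dict-of-grains return shape:
{% set mine_data = salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') %}
{# 'minion-with-data-i-want-1' is taken from the example output above #}
{% set ip1 = mine_data.get('minion-with-data-i-want-1', {}).get('public_ipv4') %}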

How to write airflow logs to Elasticsearch?

I am using Airflow 1.10.5. I can't seem to find complete documentation or a sample of how to set up remote logging with Elasticsearch. I saw the Airflow documentation about logging, but it wasn't helpful. I am trying to write the Airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print into a file, then use something like filebeat to post the file contents to ES, and Airflow can then read them back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
I was able to achieve this using the filebeat shipper.
Input config section in filebeat.yml
<snip>
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
A good doc to read, especially about Airflow --> ES.
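If the goal is specifically the Airflow service logs (scheduler, webserver) rather than task logs, the same kind of input can point straight at those files. A sketch, assuming a default AIRFLOW_HOME of /usr/local/airflow and the default base_log_folder; adjust the paths to your installation:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # scheduler and other service logs written under $AIRFLOW_HOME/logs
      - /usr/local/airflow/logs/scheduler/*/*.log
      - /usr/local/airflow/logs/*.log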

How to trigger an action upon change of state?

I am testing Salt as a management system, having used Ansible so far.
How can I trigger an action (specifically, a service reload) when a state has changed?
In Ansible this is done via notify, but browsing the Salt documentation I cannot find anything similar.
I found watch, which works the other way round: "watch something, and if it changed, do this and that".
There is also listen, which seems to be closer to my needs (the documentation mentions a service reload), but I cannot put the pieces together.
As an example, how would the following scenario work in Salt: check out a git repo (i.e. create it if it does not exist, or pull from it otherwise) and, if it has changed, reload a service? The Ansible equivalent is:
- name: clone my service
  git:
    clone: yes
    dest: /opt/myservice
    repo: http://git.example.com/myservice.git
    version: master
    force: yes
  notify:
    - restart my service if needed

- name: restart my service if needed
  systemd:
    name: myservice
    state: restarted
    enabled: True
    daemon_reload: yes
Your example:
ensure my service:
  git.latest:
    - name: http://git.example.com/myservice.git
    - target: /opt/myservice
  service.running:
    - name: myservice
    - watch:
      - git: http://git.example.com/myservice.git
When there is a change in the repo (a first-time clone, an update, etc.), the state is marked as "having changes", and the dependent states - service.running in this case - act on those changes; for a service that means a restart.
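Since the question specifically asks about a reload: service.running also accepts a reload option, so the watch can trigger a reload instead of a full restart. A sketch, assuming the unit is actually called myservice:
ensure my service is reloaded:
  service.running:
    - name: myservice
    - enable: True
    - reload: True    # with reload: True, a triggered watch reloads the service instead of restarting it
    - watch:
      - git: http://git.example.com/myservice.git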
What you are asking is covered in the Salt quickstart.

How do I implement a 'pillar.example' from a SaltStack Formula?

If this explanation exists somewhere, I've spent 3 months trying to find it and failed. I come from a Puppet background, but for various reasons I really want to try replacing it with Salt.
I've got a basic setup and I can code my own states and see them work without any issues. The documentation on this is pretty clear. Where I'm stuck is attempting to implement a community Salt formula. I can include a formula with its basic setup and it works fine, however I cannot figure out how to override the defaults from my pillar data. This seems to be where the Salt documentation is weakest.
The documentation states that you should check the pillar.example for how to configure the formula. The pillar.example shows the configuration part clearly, however neither the documentation nor the pillar.example tells you how to include this in your pillar data.
In my case I'm trying to use the snmp-formula. I've got a basic setup for my salt file structure, which you can see from my file roots:
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula
Inside base I have two pillars:
base/
    top.sls
    common.sls
top.sls is very simple:
base:
  '*':
    - common
common.sls has all common config:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop

snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
In common.sls I've included the snmp-formula and then followed the pillar.example from the formula to customize the configuration. However when I run a test with this I get the following error:
Data failed to compile:
----------
Detected conflicting IDs, SLS IDs need to be globally unique.
The conflicting ID is 'snmp' and is found in SLS 'base:common' and SLS 'base:snmp'
I'm not sure how to proceed. It seems like I would have to modify the community formula directly to achieve what I want, which seems like the wrong idea. I want to be able to keep the community formula up to date with its repository, and coming from the Puppet perspective, I should be overriding a module's defaults as needed, not modifying the module directly.
Can someone please make the missing connection for me? How do I implement the pillar.example?
The Salt formula in question is here:
https://github.com/saltstack-formulas/snmp-formula
I have finally figured this out, and it was a problem with a fundamental misunderstanding of the difference between 'file_roots' and 'pillar_roots', as well as 'pillars' vs 'states'. I don't feel the Getting Started guide is very clear about these, so I'll explain, but first the answer.
ANSWER:
To implement the above pillar.example, simply create a dedicated snmp.sls file in the 'base' environment of your pillar data:
/srv/pillar/snmp.sls:
snmp:
  conf:
    location: 'Office'
    syscontact: 'Example.com Admin <admin@example.com>'
    logconnects: false
    # vacm com2sec's (map communities into security names)
    com2sec:
      - name: mynetwork
        source: 192.168.0.13/31
        community: public
    # vacm group's (map security names to group names)
    groups:
      - name: MyROGroup
        version: v1
        secname: mynetwork
      - name: MyROGroup
        version: v1c
        secname: mynetwork
    # vacm views (map mib trees to views)
    views:
      - name: all
        type: included
        oid: '.1'
        mask: 80
    # vacm access (map groups to views with access restrictions)
    access:
      - name: MyROGroup
        context: '""'
        match: any
        level: noauth
        prefix: exact
        read: all
        write: none
        notify: none
    # v3 users for read-write
    rwusers:
      - username: 'nagios'
        passphrase: 'myv3password'
        view: all
Your pillar_roots must also include a top.sls (not to be confused with the top.sls in your file_roots for your states), like this:
/srv/pillar/top.sls
base:
  '*':
    - snmp
IMPORTANT: This directory and this top.sls for pillar data must not live under, or be included by, your file_roots! This is where I was going wrong. For a complete picture, this is the config I now have:
/etc/salt/master: (snippet)
file_roots:
  base:
    - /srv/salt/base
    - /srv/formulas/snmp-formula

pillar_roots:
  base:
    - /srv/pillar
Inside /srv/salt/base I have a top.sls which includes a common.sls for the 'base' environment. This is where the snmp-formula and its states are included.
/srv/salt/base/top.sls:
base:
  '*':
    - common
/srv/salt/base/common.sls:
include:
  - snmp
  - snmp.conf
  - snmp.trap
  - snmp.conftrap

tcpdump:
  pkg.latest:
    - name: tcpdump

telnet:
  pkg.latest:
    - name: telnet

htop:
  pkg.latest:
    - name: htop
Now the snmp key in the pillar data does not conflict with the ID of the snmp state from the formula included in the state data.
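For what it's worth, the reason the pillar approach works is that the formula's templates read these values back with pillar.get and merge them over their own defaults. A simplified illustration of the pattern (the real formula goes through a map.jinja; the keys and fallback values here are only an assumption):
{# read the pillar override, falling back to an empty dict #}
{% set conf = salt['pillar.get']('snmp:conf', {}) %}
sysLocation {{ conf.get('location', 'Unknown') }}
sysContact  {{ conf.get('syscontact', 'root@localhost') }}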

What are "states" when using SaltStack?

I'm trying SaltStack after using Puppet for a while, but I can't understand their use of the word "state".
My understanding is that, for example, a light switch has 2 possible states - on or off. When I write my SLS configuration I am describing what state a server should be in. When I ask SaltStack to provision a server I issue the command salt '*' state.highstate. I understand that a server can be in a highstate (as described in my config) or not. All good so far.
But this page describes other states. It describes lowstate, highstate and overstate (amongst others) as layers. Does this mean a server passes through several states to get to a highstate? Or all states are maintained simultaneously as layers? Or can I configure multiple possible states in my SLS and have SaltStack switch between them? Or are they just layers to SaltStack that have 'state' in the name and I'm confused?
I'm probably missing something obvious, if anyone can nudge me in the right direction I think a lot of the documentation will become clear to me!
Here is a top.sls which contains:
# cat top.sls
base:
  '*':
    - httpd_require
and,
# cat httpd_require.sls
install_httpd:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - file: install_httpd
  file.managed:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
High State:
We can see all the aspects of the high state system while working with state files (.sls). There are three specific components:
High data
SLS file
High State
Each individual State represents a piece of high data (the pkg.installed: block, for example). Salt compiles all relevant SLS files referenced in top.sls; when these files are tied together using includes, and further glued together for use inside an environment using a top.sls file, they form a High State.
# salt 'remote_minion' state.show_highstate --out yaml
remote_minion:
  install_httpd:
    __env__: base
    __sls__: httpd_require
    file:
    - name: /var/www/html/index.html
    - source: salt://index1.html
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_httpd
    - managed
    - order: 10002
    pkg:
    - name: httpd
    - installed
    - order: 10000
    service:
    - name: httpd
    - enable: true
    - require:
      - file: install_httpd
    - running
    - order: 10001
First, an order is declared. All States that are set to be first will have their order adjusted accordingly. Salt will then add 10000 to the last defined number (which is 0 by default) and add any States that are not explicitly ordered.
Salt will also add some variables that it uses internally, to know which environment (__env__) to execute the State in and which SLS file (__sls__) the State declaration came from. Remember that the order is still no more than a starting point; the actual High State will be executed based first on requisites, and then on order.
In other words, "high" data refers generally to data as it is seen by the user.
Low State:
"Low" data refers generally to data as it is ingested and used by Salt.
Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:
Low State
Low chunks
State module
Execution module(s)
# salt 'remote_minion' state.show_lowstate --out yaml
remote_minion:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  enable: true
  fun: running
  name: httpd
  order: 10001
  require:
  - file: install_httpd
  state: service
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: managed
  group: root
  mode: 644
  name: /var/www/html/index.html
  order: 10002
  require:
  - pkg: install_httpd
  source: salt://index1.html
  state: file
  user: root
Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk on this list looks like this:
- __env__: base
  __id__: install_httpd
  __sls__: httpd_require
  fun: installed
  name: httpd
  order: 10000
  state: pkg
Each low chunk maps to a State module (in this case, pkg) and a function inside that State module (in this case, installed). An ID is also provided at this level (__id__). Salt will map relationships (that is, requisites) between States using a combination of state and __id__. If a name has not been declared by the user, then Salt will automatically use the __id__ as the name. Once a function inside a State module has been called, it will usually map to one or more execution modules which actually do the work.
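As a small illustration of the name-defaults-to-ID rule (an assumed, minimal state, not part of the example above):
# no "- name:" here, so Salt uses the ID "httpd" as the package name
httpd:
  pkg.installed: []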
salt '\*' state.highstate
'*' refers to all the minions connected to the master.
'state.highstate' is used to run all the states mentioned in the top.sls defined on the master.
To apply a specific state file on all minions, use the following salt command, where the state information is defined in apache.sls in the example given below.
salt '\*' state.sls apache
To invoke the above salt call only on a specific minion, use the below command.
salt 'minion-name' state.sls apache
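For completeness, a minimal sketch of what /srv/salt/apache.sls might contain (the package/service name httpd is an assumption for a RedHat-family box):
apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
    - require:
      - pkg: apache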
I don't know all the levels of state, but when you run:
salt '*' state.highstate
SaltStack applies the states you provide in /srv/salt/top.sls.
If you write nothing in it, you can't apply a highstate.
You can apply another state with this command:
salt '*' state.sls state.example
A highstate is just the collection of states that is applied to your server. There is a process in the background where Salt's "state compiler" goes through several stages preparing the data in order to produce the highstate, but you don't really need to worry about those.
Things like the lowstate can help with debugging, but aren't necessary for day to day usage. The highstate is only applied once.
