How to send a custom metric from collectd (Stackdriver agent) to Stackdriver

I can't send data to Stackdriver. It's a simple example.
I'm using the syntax I found in an example for collectd.
The Stackdriver documentation doesn't have info about how to send custom data from the exec plugin.
What am I doing wrong?
This is collectd.conf
LoadPlugin exec
<Plugin "exec">
Exec "apache" "/etc/stackdriver/collectd.d/dir.sh"
</Plugin>
This is dir.sh
#!/bin/bash
FOLDER="/var/www/"
while true; do
DU=$(du -shm ${FOLDER} | awk '{print $1}')
echo "PUTVAL \"projects/project_name/custom.googleapis.com/folder/completesolar\" interval=60 N:${DU}"
sleep 60
done
Script output
$/etc/stackdriver/collectd.d/dir.sh
PUTVAL "projects/project_name/custom.googleapis.com/folder/completesolar" interval:60 N:1155
I enabled debug mode and found this error:
[2018-09-21 00:45:55] utils_cmd_putval: handle_putval (fh = 0x3e71d8f040, buffer = PUTVAL "projects/project_name/custom.googleapis.com/folder/completesolar" interval=60 N:1155);
[2018-09-21 00:45:55] No such dataset registered: custom.googleapis.com/folder/completesolar
I created this metric and found it in stackdriver console:
http://joxi.ru/a2XlPGvi1VzJL2
This is json for creating my metric:
{
"name": "projects/project_name/metricDescriptors/custom.googleapis.com/folder/completesolar",
"metricKind": "GAUGE",
"valueType": "DOUBLE",
"unit": "By",
"description": "Folder bytes used",
"displayName": "Folder usage",
"type": "custom.googleapis.com/folder/completesolar",
"metadata": {
"launchStage": "GA",
"samplePeriod": "60s",
"ingestDelay": "0s"
}
}

This should work.
/etc/stackdriver/collectd.d/dir.conf
LoadPlugin exec
<Plugin "exec">
Exec "apache" "/path/to/dir.sh"
</Plugin>
LoadPlugin target_set
PreCacheChain "PreCache"
<Chain "PreCache">
<Rule "dir">
<Match regex>
Plugin "^exec$"
PluginInstance "^dir$"
</Match>
<Target "set">
MetaData "stackdriver_metric_type" "custom.googleapis.com/folder/completesolar"
# MetaData "stackdriver_metric_type" "custom.googleapis.com/%{plugin_instance}/%{type_instance}"
# MetaData "label:name" "dir usage"
</Target>
</Rule>
</Chain>
/path/to/dir.sh
#!/bin/bash
INTERVAL="$COLLECTD_INTERVAL"
HOSTNAME="$COLLECTD_HOSTNAME"
FOLDER="/var/www/"
while true; do
DU=$(du -sm ${FOLDER} | awk '{print $1}')
# https://collectd.org/documentation/manpages/collectd-exec.5.shtml
# PUTVAL Identifier [OptionList] Valuelist
# Identifier: <host>/<plugin>-<plugin_instance>/<type>-<type_instance>
# Type: See /opt/stackdriver/collectd/share/collectd/types.db
echo "PUTVAL ${HOSTNAME}/exec-dir/gauge-usage/ interval=${INTERVAL} N:${DU}"
sleep "${INTERVAL}"
done
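If you want to confirm that points are arriving, here is a minimal sketch (not part of the original answer) that reads the custom metric back through the Monitoring v3 timeSeries.list API; project_name is a placeholder and gcloud is assumed to be installed and authenticated:
#!/bin/bash
# Sketch: read back the custom metric to confirm ingestion.
# "project_name" is a placeholder; gcloud must be authenticated.
PROJECT="project_name"
TOKEN="$(gcloud auth print-access-token)"
FILTER='metric.type="custom.googleapis.com/folder/completesolar"'
START="$(date -u -d '-10 minutes' +%Y-%m-%dT%H:%M:%SZ)"  # GNU date
END="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
curl -sG \
  -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode "filter=${FILTER}" \
  --data-urlencode "interval.startTime=${START}" \
  --data-urlencode "interval.endTime=${END}" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT}/timeSeries"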
See the following page if you want to pre-create or delete a metric.
https://cloud.google.com/monitoring/custom-metrics/creating-metrics
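For example, a hedged sketch of pre-creating the descriptor from the question with curl (assuming the JSON above is saved as metric-descriptor.json and gcloud is authenticated; project_name is a placeholder):
#!/bin/bash
# Sketch: pre-create the metric descriptor via the Monitoring v3 API.
PROJECT="project_name"
TOKEN="$(gcloud auth print-access-token)"
curl -s -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d @metric-descriptor.json \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT}/metricDescriptors"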

Looks like you are following this guide to ingest exec metrics. I think the rewrite rules did not work, so the metric ended up being ingested through the regular agent-metrics path, which rejects unrecognized metrics. You might need to modify the metadata for your metric by using a filter chain. I would suggest revisiting the troubleshooting guide.

Related

Azure Powershell - Problem with assigning private IP address to an NIC

Here is my script
$my_vnet = Get-AzVirtualNetwork -Name <vnet_name>
$my_subnet = Get-AzVirtualNetworkSubnetConfig -Name <subnet_name> -VirtualNetwork $my_vnet
Add-AzNetworkInterfaceIpConfig -Name ext-ipconfig6 -NetworkInterface $my_nic -Subnet $my_subnet -PrivateIpAddress 10.0.0.6
There is no error when running the script. If I use the following command to check, I do see the IP object created.
Get-AzNetworkInterfaceIpConfig -Name ext-ipconfig6 -NetworkInterface $my_nic
...
{
"Name": "ext-ipconfig6",
"PrivateIpAddress": "10.0.0.6",
"PrivateIpAllocationMethod": "Static",
"Subnet": {
"Id": "blabla"
},
"Primary": false
}
However, I don't see it created on the portal.
Comparing it with configurations created in the portal, I see those have other properties like Etag, Id, ProvisioningState, etc. Where did I go wrong?
Thanks!
You are just creating the network interface IP configuration and not applying it to the existing network interface itself. Add-AzNetworkInterfaceIpConfig only updates the local in-memory object; nothing is sent to Azure until you call Set-AzNetworkInterface.
I tested the same script and reproduced the behavior.
To fix this, add | Set-AzNetworkInterface after the Add-AzNetworkInterfaceIpConfig command, like below:
$vnet = Get-AzVirtualNetwork -Name "ansuman-vnet" -ResourceGroupName "ansumantest"
$nic= Get-AzNetworkInterface -Name "ansumannic01" -ResourceGroupName "ansumantest"
Add-AzNetworkInterfaceIpConfig -Name "IPConfig2" -NetworkInterface $nic -Subnet $vnet.Subnets[0] -PrivateIpAddress "10.0.0.7" | Set-AzNetworkInterface

How do I get just the STDOUT of a salt state?

I'm learning SaltStack right now, and I was wondering if there is a way to get the stdout of a salt state, put it into a document, and then send it to the master. Or is there a better way to do this?
To achieve this, we'll have to save the result of the script execution in a variable. It will contain a hash whose keys are what shows up under changes:. The stdout entry of that variable can then be written to a file.
{% set script_res = salt['cmd.script']('salt://test.sh') %}
create-stdout-file:
file.managed:
- name: /tmp/script-stdout.txt
- contents: {{ script_res.stdout }}
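As a usage sketch (the state file name get-stdout.sls is hypothetical), you would apply it like any other state:
# Apply the state above (saved as e.g. salt://get-stdout.sls) on a minion;
# the script's stdout ends up in /tmp/script-stdout.txt on that minion.
sudo salt 'minion-id' state.apply get-stdout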
The output is already going to the master. It would be better to output in JSON and query down to the data you want in your document on the master, such as in the following examples.
Normal output
$ sudo salt salt00\* state.apply tests.test3
salt00.wolfnet.bad4.us:
----------
ID: test_run
Function: cmd.run
Name: echo test
Result: True
Comment: Command "echo test" run
Started: 10:39:51.103057
Duration: 18.281 ms
Changes:
----------
pid:
8661
retcode:
0
stderr:
stdout:
test
Summary for salt00.wolfnet.bad4.us
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
Total run time: 18.281 ms
JSON output
$ sudo salt salt00\* state.apply tests.test3 --out json
{
"salt00.wolfnet.bad4.us": {
"cmd_|-test_run_|-echo test_|-run": {
"name": "echo test",
"changes": {
"pid": 9057,
"retcode": 0,
"stdout": "test",
"stderr": ""
},
"result": true,
"comment": "Command \"echo test\" run",
"__sls__": "tests.test3",
"__run_num__": 0,
"start_time": "10:40:55.582273",
"duration": 19.374,
"__id__": "test_run"
}
}
}
JSON parsed down with jq to just the stdout:
$ sudo salt salt00\* state.apply tests.test3 --out=json | jq '.|.[]|."cmd_|-test_run_|-echo test_|-run"|.changes.stdout'
"test"
Also, for the record, it is considered bad practice to put code that changes the system into Jinja. Jinja always runs when a template is rendered, and there is no way to control whether that happens, so even a test=True run will still execute the Jinja code that makes changes, which could be very harmful to your systems.

How to configure environment variables in Azure DevOps pipeline?

I have an Azure Function (.NET Core) that is configured to read application settings from both a JSON file and environment variables:
var configurationBuilder = new ConfigurationBuilder()
.SetBasePath(_baseConfigurationPath)
.AddJsonFile("appsettings.json", optional: true)
.AddEnvironmentVariables()
.Build();
BuildAgentMonitorConfiguration configuration = configurationBuilder.Get<BuildAgentMonitorConfiguration>();
appsettings.json has the following structure:
{
"ProjectBaseUrl": "https://my-project.visualstudio.com/",
"ProjectName": "my-project",
"AzureDevOpsPac": ".....",
"SubscriptionId": "...",
"AgentPool": {
"PoolId": 38,
"PoolName": "MyPool",
"MinimumAgentCount": 2,
"MaximumAgentCount": 10
},
"ContainerRegistry": {
"Username": "mycontainer",
"LoginServer": "mycontainer.azurecr.io",
"Password": "..."
},
"ActiveDirectory": {
"ClientId": "...",
"TenantId": "...",
"ClientSecret": "..."
}
}
Some of these settings are configured as environment variables in the Azure Function, and everything works as expected.
The problem now is to configure some of these variables in a build pipeline, where they are used by unit and integration tests. I've tried adding a variable group and linking it to the pipeline, but the environment variables are not being set and the tests are failing. What am I missing here?
I had the same use case, in which I wanted an environment variable to be set up by the Azure build pipeline so that the test cases could access it and pass.
Directly setting the env variable with export or ENV does not work for subsequent tasks. To have the environment variable set up for subsequent tasks, follow the syntax described at https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch, i.e. task.setvariable within a script tag.
Correct way of setting an env variable in the build pipeline:
- script: |
echo '##vso[task.setvariable variable=LD_LIBRARY_PATH]$(Build.SourcesDirectory)/src/Projectname/bin/Release/netcoreapp2.0/x64'
displayName: set environment variable for subsequent steps
Please be careful with the spaces, as this is YAML. The script above sets the variable LD_LIBRARY_PATH (used on Linux to define the search path for .so files) to the directory given.
This style of setting the environment variable also works for subsequent tasks, but if we set the env variable as shown below, it is set only for that specific shell instance and is not applicable to subsequent tasks.
Wrong way of setting an env variable:
- script: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(Build.SourcesDirectory)/src/CorrectionLoop.HttpApi/bin/Release/netcoreapp2.0/x64
displayName: Set environment variable
You can use similar syntax to set up your own environment variable.
I ran into this as well when generating EF SQL scripts from a build task. According to the docs, the variables you define in the "Variables" tab are also provided to the process as environment variables.
Notice that variables are also made available to scripts through environment variables. The syntax for using these environment variables depends on the scripting language. Name is upper-cased, . replaced with _, and automatically inserted into the process environment.
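For example, a pipeline variable named connectionString (the name used in the snippet below) would surface in a bash step as CONNECTIONSTRING:
#!/bin/bash
# Illustration of the documented mapping: the pipeline variable
# "connectionString" is upper-cased to CONNECTIONSTRING in the environment.
echo "Connection string from environment: ${CONNECTIONSTRING}"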
In my instance, I just had to load a connection string and deal with the case difference of the key between the JSON file and the environment:
var config = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", true, true)
.AddEnvironmentVariables()
.Build();
var connectionString = config["connectionString"] ?? config["CONNECTIONSTRING"];
If you are using bash, their example does not work, because the documentation refers to the variables incorrectly. Instead it should be:
Store secret
#!/bin/bash
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
echo "##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic"
Retrieve secret
Wrong (their example):
#!/bin/bash
echo "No problem reading $1 or $SAUCE"
echo "But I cannot read $SECRET_SAUCE"
echo "But I can read $2 (but the log is redacted so I do not spoil the secret)"
Right:
#!/bin/bash
echo "No problem reading $(sauce)"
echo "But I cannot read $(secret.Sauce)"
Configure KeyVault Secrets in Variable Groups for FileTransform#1
The steps below read the KeyVault secrets used in a variable group and add them to the environment variables for FileTransform#1 to use.
Set up KeyVault.
Create a variable group and import the values you want to use for the pipeline.
In this example we used:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
Set up names to match the KeyVault names (you can pass these into the YAML steps template):
#These parameters are here to support Library > Variable Groups > with "secrets" from a KeyVault
#KeyVault keys cannot contain "_" or "." as FileTransform#1 wants
#This script takes "--" keys, replaces the "--" with ".", and adds them as "env:" variables so the transform can do its thing.
parameters:
- name: apiSecretKeys
displayName: apiSecretKeys
type: object
default:
- ConnectionStrings--Context
- Cloud--AuthSecret
- Compumail--ApiPassword
stages:
- template: ./locationOfTemplate.yml
parameters:
apiSecretKeys: ${{ parameters.apiSecretKeys }}
... build api - publish to .zip file
Set up variable groups at the job level:
variables:
#next line here for pipeline validation purposes..
- ${{if parameters.variableGroup}}:
- group: ${{parameters.variableGroup}}
#OR
#- VariableGroupNameContinaingSecrets
Template file: (the magic)
parameters:
- name: apiSecretKeys
displayName: apiSecretKeys
type: object
default: []
steps:
- ${{if parameters.apiSecretKeys}}:
- powershell: |
$envs = Get-childItem env:
$envs | Format-List
displayName: List Env Vars Before
- ${{ each key in parameters.apiSecretKeys }}:
- powershell: |
$writeKey = "${{key}}".Replace('--','.')
Write-Host "oldKey :" ${{key}}
Write-Host "value :" "$(${{key}})"
Write-Host "writeKey:" $writeKey
Write-Host "##vso[task.setvariable variable=$writeKey]$(${{key}})"
displayName: Writing Dashes To LowDash for ${{key}}
- ${{ each key in parameters.apiSecretKeys }}:
- powershell: |
$readKey = "${{key}}".Replace('--','.')
Write-Host "oldKey :" ${{key}}
Write-Host "oldValue:" "$(${{key}})"
Write-Host "readKey :" $readKey
Write-Host "newValue:" ('$env:'+"$($readKey)" | Invoke-Expression)
displayName: Read From New Env Var for ${{key}}
- powershell: |
$envs = Get-childItem env:
$envs | Format-List
name: List Env Vars After adding secrets to env
Run Task - task: FileTransform#1
Enjoy ;)

Upload Image on TryStack Server using Packer tool

I am trying to create and upload an Ubuntu-based image to the TryStack server using the Packer tool. I am using Windows to do it. I have created a sample template, plus a script file for setting environment variables, and the machine is provisioned using Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files.
template.json
{
"builders": [
{
"type": "openstack",
"ssh_username": "root",
"image_name": "sensor-cloud",
"source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
"flavor": "3",
"availability_zone": "nova",
"security_groups": ["mySecurityGroup"]
}
],
"provisioners": [
{
"type": "file",
"source": "sensorCloudCookbook.zip",
"destination": "/tmp/sensorCloudCookbook.zip"
},
{
"type": "shell",
"inline": [
"curl -L https://www.opscode.com/chef/install.sh | bash"
],
"execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
},
{
"type": "shell",
"inline": [
"unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
],
"execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
},
{
"type": "shell",
"inline": [
"chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
],
"execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
}
]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
You need to source openstack-config.sh before running packer build. The OpenStack builder reads OS_AUTH_URL (and the other OS_* variables) from the environment, and an unset OS_AUTH_URL is what produces the unsupported protocol scheme "" error.
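For example, a minimal sketch (run from a bash shell, since the script uses export and read -s):
#!/bin/bash
# "source" runs the script in the current shell, so the exported OS_*
# variables (OS_AUTH_URL and friends) are inherited by the packer
# process started afterwards.
source ./openstack-config.sh
packer build template.json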

Apigee Command Line import returns 500 with NullPointerException

I'm trying to customise the deploy scripts to allow me to deploy each of my four API proxies from the command line. It looks very similar to the one provided in the samples on GitHub:
#!/bin/bash
if [[ $# -eq 0 ]] ; then
echo 'Must provide proxy name.'
exit 1
fi
dirname=$1
proxyname="teamname-"$dirname
source ./setup/setenv.sh
echo "Enter your password for user $username in the Apigee Enterprise organization $org, followed by [ENTER]:"
read -s password
echo Deploying $proxyname to $env on $url using $username and $org
./tools/deploy.py -n $proxyname -u $username:$password -o $org -h $url -e $env -p / -d ./$dirname
echo "If 'State: deployed', then your API Proxy is ready to be invoked."
echo "Run '$ sh invoke.sh'"
echo "If you get errors, make sure you have set the proper account settings in /setup/setenv.sh"
However when I run it, I get the following response:
Deploying teamname-gameassets to int on https://api.enterprise.apigee.com using my-email-address and org-name
Writing ./gameassets/teamname-gameassets.xml to ./teamname-gameassets.xml
Writing ./gameassets/policies/Add-CORS.xml to policies/Add-CORS.xml
Writing ./gameassets/proxies/default.xml to proxies/default.xml
Writing ./gameassets/targets/development.xml to targets/development.xml
Writing ./gameassets/targets/production.xml to targets/production.xml
Import failed to /v1/organizations/org-name/apis?action=import&name=teamname-gameassets with status 500:
{
"code" : "messaging.config.beans.ImportFailed",
"message" : "Failed to import the bundle : java.lang.NullPointerException",
"contexts" : [ ],
"cause" : {
"contexts" : [ ]
}
}
How should I go about debugging when I receive errors during the deploy process? Is there some sort of console I can view once logged in to Apigee?
I'm not sure how your proxy ended up this way, but it looks like the top-level directory is named "gameassets." It should be named "apiproxy". If you rename this directory you should see a successful deployment.
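For reference, a sketch of the bundle layout the import endpoint expects, using the file names from your output (the mv is illustrative):
# The bundle uploaded to /v1/organizations/{org}/apis?action=import
# must have "apiproxy" as its top-level directory, e.g.:
#   apiproxy/teamname-gameassets.xml
#   apiproxy/policies/Add-CORS.xml
#   apiproxy/proxies/default.xml
#   apiproxy/targets/development.xml
#   apiproxy/targets/production.xml
mv gameassets apiproxy   # rename, then rerun the deploy script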
Also, before you customize too much, please try out "apigeetool," which is a more flexible command-line tool for deploying proxies:
https://github.com/apigee/api-platform-tools
