Dynamically updating Telegraf config with agent hosts? - telegraf

I have a number of network switches in my infrastructure, and I have been using Telegraf to collect traffic data from them via SNMP. So far, the switch IP addresses have been added to the config statically. I was wondering whether it is possible to pull the IP list from a database, so I don't need to add each address to the config by hand? Or maybe Telegraf is just not the tool for that.
The usual config looks like this:
[[inputs.snmp]]
  agents = [ "192.168.252.15:161" ]
  version = 2
  community = "public"
  name = "snmp"

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "RFC1213-MIB::sysName.0"
    is_tag = true

  [[inputs.snmp.table]]
    name = "snmp"
    inherit_tags = [ "hostname" ]
    oid = "IF-MIB::ifXTable"

    [[inputs.snmp.table.field]]
      name = "ifName"
      oid = "IF-MIB::ifName"
      is_tag = true

I would just write a script that modifies the telegraf.conf file as needed. I do something similar for auto-scaled servers in AWS: I have a bash script that cloud-init runs when instances are created from an image that already contains most of my Telegraf config, and the script then updates the config with the new IP address/hostname.
So ultimately, I think you just need a script that runs on creation, or whatever the trigger is in your scenario. My two cents anyway...
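As a rough sketch of that approach for the SNMP case: the script below rebuilds the SNMP input from a database and reloads Telegraf. The database and table names (an inventory database with a switches table holding one ip per row), the :161 port suffix, and the output path under telegraf.d are all assumptions to adapt to your environment.
#!/usr/bin/env bash
# Regenerate the Telegraf SNMP input from a database of switch IPs.
# Assumes MySQL client auth is handled elsewhere (e.g. ~/.my.cnf) and that
# Telegraf picks up extra config files from /etc/telegraf/telegraf.d/.
set -euo pipefail

# Build a quoted, comma-separated agents list: "192.168.252.15:161", "192.168.252.16:161"
agents=$(mysql -N -B -e 'SELECT ip FROM switches' inventory \
  | awk '{printf "%s\"%s:161\"", (NR>1 ? ", " : ""), $1}')

cat > /etc/telegraf/telegraf.d/snmp.conf <<EOF
[[inputs.snmp]]
  agents = [ ${agents} ]
  version = 2
  community = "public"
  name = "snmp"

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "RFC1213-MIB::sysName.0"
    is_tag = true

  [[inputs.snmp.table]]
    name = "snmp"
    inherit_tags = [ "hostname" ]
    oid = "IF-MIB::ifXTable"

    [[inputs.snmp.table.field]]
      name = "ifName"
      oid = "IF-MIB::ifName"
      is_tag = true
EOF

systemctl reload telegraf   # or restart, depending on how Telegraf is managed
Run it from cron or from whatever provisioning hook fits, so the agents list keeps tracking the database.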

Related

Terraform Openstack attach pre-allocated Floating IPs to instance

I have a use case where I need to re-use detached Floating IPs. Is there a way to do this in Terraform? I've tried:
data "openstack_networking_floatingip_v2" "fips" {
  status = "DOWN"
}
to get a list of detached IPs, but I get an error saying there is more than one Floating IP (which is true).
Is there a good way to get detached floating IPs as a data resource in terraform? The alternative is passing an array of available IPs via a wrapper script with the command outlined here: Reuse detached floating IPs in OpenStack
For anyone else who comes across this, here is how I solved it for now:
I used the 'external' data resource to call the openstack CLI and retrieve a comma-separated list of available IPs. The openstack CLI command looks like this:
openstack floating ip list --status DOWN -f yaml -c "Floating IP Address"
In order to get the output in a format suitable for Terraform's external data resource, I used a Python script. The script outputs a JSON object that looks like this: {"ips": "ip.1.2.3,ip.4.5.6,ip.7.8.9"}
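The answer doesn't include the helper script itself. As a stand-in, here is a bash + jq sketch (instead of the Python script the author used, and reading the CLI's json output instead of yaml) that emits the same JSON shape the external data source expects; it assumes the openstack CLI is authenticated and jq is installed.
#!/usr/bin/env bash
# Print {"ips":"1.2.3.4,5.6.7.8"} built from the detached floating IPs.
set -euo pipefail

openstack floating ip list --status DOWN -f json -c "Floating IP Address" \
  | jq -c '{ips: (map(."Floating IP Address") | join(","))}'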
The external data resource in Terraform looks like this:
data "external" "ips" {
  program = ["python", "<path-to-python-script>"]
}
From there I'm able to split the comma-separated string of IPs in Terraform and access the IPs as a list:
output "available_ips" {
  value = split(",", data.external.ips.result.ips)
}
It's definitely not elegant. I wish the openstack_networking_floatingip_v2 data resource allowed for this; I'll look into opening an issue to get it added.

While configuring the BPS DB in WSO2 IS 5.9.0, which scripts do I have to import into MySQL?

I am following this document: https://is.docs.wso2.com/en/5.9.0/setup/changing-datasource-bpsds/
deployment.toml configurations:
[bps_database.config]
url = "jdbc:mysql://localhost:3306/IAMtest?useSSL=false"
username = "root"
password = "root"
driver = "com.mysql.jdbc.Driver"
Executing the database scripts:
Navigate to <IS-HOME>/dbscripts. Execute the scripts in the following files against the database you created:
<IS-HOME>/dbscripts/bps/bpel/create/mysql.sql
<IS-HOME>/dbscripts/bps/bpel/drop/mysql-drop.sql
<IS-HOME>/dbscripts/bps/bpel/truncate/mysql-truncate.sql
Now, create/mysql.sql creates the tables, and the other two files are responsible for dropping and truncating those same tables. What do I do?
Can anyone also tell me the use case for the BPS datasource?
Please help.
You should only change your BPS database if you have a requirement to use the workflow feature [1] in the WSO2 Identity Server. It is mentioned in this documentation: https://is.docs.wso2.com/en/5.9.0/setup/changing-to-mysql/
The document is supposed to mention only the relevant DB script, but it is misleading, as it asks you to execute all three. If you are using the workflow feature, just use the
/dbscripts/bps/bpel/create/mysql.sql
script to create the tables in your MySQL database.
[1]. https://is.docs.wso2.com/en/5.9.0/learn/workflow-management/
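For reference, running just that create script against the IAMtest database from the deployment.toml above could look like the following; the IS_HOME path is an assumption, and the host, user, and database name are taken from the question's example config.
# Run only the BPEL create script against the BPS database from deployment.toml.
IS_HOME=/opt/wso2is-5.9.0   # assumption: point this at your actual <IS-HOME>
mysql -h localhost -u root -p IAMtest < "$IS_HOME/dbscripts/bps/bpel/create/mysql.sql"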

Kannel: get sms status from database

I'm making a Symfony application that stores a huge number of SMS messages in the database, and Kannel detects these SMS and sends them (I'm using sqlbox, of course). The problem is that Kannel notifies our Symfony app about each SMS through the dlr-url, which causes a lot of memory usage in Apache: for every SMS we get about 3 HTTP requests from the DLR to update that SMS, so for 100k SMS we get 300k requests, and each request updates the database...
So what I'm thinking is: why not have Kannel update the SMS status in the database directly, without calling the dlr-url? Is that possible?
From my understanding, your tests are based on the following configuration:
sqlbox to send messages (through inserts into the send_sms table)
dlr-url set in your configuration to get delivery reports
no custom dlr-storage
How to keep DLRs without additional HTTP calls
There is already a mechanism to get DLRs into a database automatically: that is the point of dlr-storage.
In the Kannel documentation, you will see that this field has several possible values:
Supported types are: internal, spool, mysql, pgsql, sdb, mssql,
sqlite3, oracle and redis. By default this is set to internal.
In my experience, when using a database dlr-storage, delivery reports (DLRs) are only kept in the table while the delivered status has not yet been received; after that they are automatically deleted.
So if you wish to keep some log of the sent items, you need to edit some files (gw/dlr_mysql.c and gw/dlr.c) to avoid this delete.
Configuration of the dlr-storage
Here I will provide an example with MySQL.
Sample of additional configuration in the kannel.conf file:
# this line must be in the "core" group
dlr-storage = mysql
#---------------------------------------------
# DLR STORAGE
#
#
group = mysql-connection
id = mydlr
host = localhost
username = *yourMySqlUserName*
password = *yourMySqlPass*
database = *yourMySqlDatabaseWithTheDlrTable*
max-connections = 1
# Group defining where are the data in the db (table, columns)
group = dlr-db
id = mydlr
table = dlr
field-smsc = smsc
field-timestamp = ts
field-destination = destination
field-source = source
field-service = service
field-url = url
field-mask = mask
field-status = status
field-boxc-id = boxc

cloudmonkey with crontab?

I have CloudStack 4.2.1 here and would like my VMs to boot and shut down at scheduled times.
Hence I was thinking I could integrate CloudMonkey with crontab:
first by creating a CloudMonkey script or API call, and then using crontab to run it at a specific time.
However, I have problems creating the CloudMonkey script/API call...
I have googled and found this link:
http://dlafferty.blogspot.sg/2013/07/using-cloudmonkey-to-automate.html
and came up with the following command:
apiresult=cloudmonkey api stop virtualmachine id="'e10bdf21-2d5c-4277-9d8d-791b82b9e3be'"
Unfortunately, when I entered this command, nothing happened. If anyone has an alternative suggestion, or if my API call command is wrong, please correct me and help.
Thank you.
CloudMonkey requires some setup before it works (e.g. setting your API key).
Check [1] for the CloudMonkey documentation and follow the Usage section to set up your environment.
Once your setup is complete and you can interact with CloudStack via CloudMonkey, you should take into account that the VM IDs might change, so before you issue a command for a VM you should first find the correct ID by listing the VMs and picking the right one.
Also, if you run into trouble, post the relevant log from the CloudStack management server (typically in /var/log/cloudstack/management/management-server.log).
[1] - https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI
Edit: If you have a working connection to CloudStack via CloudMonkey, you need to configure CloudMonkey in the same way in your shell script. For instance, when you configured CloudMonkey you probably set a host, a port, and your API and secret keys. So for your script to work, you need to provide the same configuration to CloudMonkey before issuing the commands. My best guess is to use the -c option and provide a config file that sets all the relevant parameters (e.g. API and secret key): cloudmonkey -c CONFIG_FILE ....
Edit 2: You don't actually need to re-configure CloudMonkey in your script, because it will remember your config from the interactive session. I would still advise you to do it, because it makes your script more reliable. I've just made an example script like this:
#! /bin/bash
result=$(cloudmonkey list users)
echo $result
Result:
> ./tmp.sh
count = 1 user: id = 678e3a24-082c-11e4-86de-acbdb2423647 account = admin accountid = 678dffe6-082c-11e4-86de-acbdb2423647 accounttype = 1 apikey = T6sDBIpytyJ4_PMgNXYi8YgjMtwTiiDjijbXNB1J78EAZq2foKhCoGKjgJnej5tMaHM0LUvejgTddkhVU63wdw created = 2014-07-10T16:19:13+0200 domain = ROOT domainid = 678dd7b4-082c-11e4-86de-acbdb2423647 email = admin#mailprovider.com firstname = Admin iscallerchilddomain = False isdefault = True lastname = User secretkey = dzOPRecI5vvEVK7Vie2D0tDsQGXunUnpIAczbXnPI3sfMwQ-upWL_bPOisEYg4C-nXi-ldQno2KVZbVR-5NmVw state = enabled username = admin
Maybe you forgot to echo the result?
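To tie this back to the crontab part of the question: once CloudMonkey runs non-interactively like this, a pair of cron entries can stop and start a VM on a schedule. A minimal sketch follows, where the VM id is the one from the question and the config path, times, and log path are placeholders to adjust; it also assumes cloudmonkey is on cron's PATH (otherwise use its full path).
# Stop the VM at 20:00 and start it at 07:00 on weekdays.
0 20 * * 1-5 cloudmonkey -c /home/user/.cloudmonkey/config stop virtualmachine id=e10bdf21-2d5c-4277-9d8d-791b82b9e3be >> /var/log/vm-schedule.log 2>&1
0 7 * * 1-5 cloudmonkey -c /home/user/.cloudmonkey/config start virtualmachine id=e10bdf21-2d5c-4277-9d8d-791b82b9e3be >> /var/log/vm-schedule.log 2>&1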

Biztalk File send port with a variable path

Is it possible to make the send port change its output location based on a promoted property?
We have an interface that needs to send files to a different location depending on the client. But we add clients on a regular basis, so adding a new send port each time (both in the administration console and in the orchestration) would require a lot of maintenance, while the only thing that changes is the directory.
The folders are like this ...
\\server\SO\client1\Out
\\server\SO\client2\Out
\\server\SO\client3\Out
I tried using the SourceFilename property to create a file name like client1\Out\filename.xml, but this doesn't work.
Is there any way to do this with a single send port?
It is possible to set the OutboundTransportLocation property in the context. This property contains the full path/name of the file that will be output by the file adapter. So in your case, I guess you could do something along these lines (if it had to be done in a pipeline component):
message.Context.Write(
    OutboundTransportLocation.Name,
    OutboundTransportLocation.Namespace,
    string.Format(@"\\server\SO\{0}\Out", client));
Of course you can do a similar thing in your orchestration.
No need for a dynamic port...
