scollector - tagging metrics from vsphere - bosun

Just a question about scollector tagging. I have a config file that looks like this:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "Vsphere"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>)\.[.]+'
The idea is that we can retrieve and tag data from different vSphere servers.
My understanding of the docs is that this will give us a number of different tag values based on what is regex'd out of the vSphere hostname. The initial tags are for the local host, and then we use overrides for the data coming from vSphere.
However, when I implement this, I notice that these metrics are coming in with the original environment tag of "bosun" rather than the override being applied.
I have tried an alternate config:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env01"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env02"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env03"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env04"
But I am seeing similar behavior (the last environment tag is applied to all vSphere data), so I'm not quite sure where I am going wrong.
Can someone help me understand where I am going wrong here?
Update
As per Greg's answer below, my problem was that I didn't have the CollectorExpr quite right.
Using scollector -l, I was able to come up with the correct CollectorExpr.
# ./scollector-linux-amd64 -l | grep vsphere
vsphere-CUST1-SITE1-MGMTVC01
vsphere-CUST1-SITE2-MGMTVC01
vsphere-CUST1-SITE1-CLIVC01
vsphere-CUST1-SITE2-CLIVC01
#
Our config (for those looking for examples) ended up something like this:
Host = "hwbosun01:80"
BatchSize = 5000
[Tags]
customer = "Customer1"
environment = "bosun"
datacenter = "eq"
[[Vsphere]]
Host = "CUST1-SITE1-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE2-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE1-CLIVVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST-SITE1-CLIVVC01"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "CUST-SITE1-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'

I believe CollectorExpr is a regular expression that must match against the output of scollector -l or the collector tag values used in the scollector.collector.duration metric. Our vsphere instances get the tag values of vsphere-ny-vsphere02 for ny-vsphere02 and vsphere-nyhq-vsphere01 for nyhq-vsphere01. The following settings should match against those collector names:
[[TagOverride]]
CollectorExpr = "vsphere-ny-"
[TagOverride.Tags]
datacenter = 'ny'
[[TagOverride]]
CollectorExpr = "vsphere-nyhq-"
[TagOverride.Tags]
datacenter = 'nyhq'
Using [TagOverride.MatchedTags] instead of [TagOverride.Tags] should work to extract the value out of the hostname, but keep in mind that all the hostnames are truncated to their shortname (no FQDN) unless you set FullHost = true in the scollector.toml file. My guess is your settings are failing because the CollectorExpr is incorrect. Try something like:
[[TagOverride]]
CollectorExpr = "vsphere-"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>[^.]+)'
If that doesn't work, try using '[TagOverride.Tags]' in a dev environment to see if you can add test tags/values to those metrics.
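For example, a minimal throwaway override for testing (the tag name and value below are made up purely for this check) would stamp an obvious tag on everything coming out of the vsphere collectors if the CollectorExpr matches at all:
[[TagOverride]]
CollectorExpr = "vsphere-"
[TagOverride.Tags]
# hypothetical test tag, only there to confirm the override is being applied
testtag = "override-check"
If that test tag shows up on the vsphere metrics, the CollectorExpr is fine and you can switch back to [TagOverride.MatchedTags].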

Related

APIs not getting deployed in one gateway in latest API Manager (wso2am-4.0.0)

I'm using the latest API Manager (wso2am-4.0.0) and I am trying to implement one control plane node (which also acts as a separate gateway node) and a gateway worker node, as per the documentation.
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/deploy-api/deploy-through-multiple-api-gateways/
Currently, the APIs are getting deployed to the gateway worker node as expected (when accessed via port 8243) but not to the control plane node, which results in a 404 Not Found when invoking the API via the control plane node (when accessed via port 8244).
The relevant deployment.toml configurations for both nodes are given below.
Control Plane
[[apim.gateway.environment]]
name = "Production Gateway"
type = "production"
display_in_api_console = true
description = "Production Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9443/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9099"
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://localhost:8280"
https_endpoint = "https://localhost:8243"
[[apim.gateway.environment]]
name = "Default"
type = "hybrid"
display_in_api_console = true
description = "External Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9444/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9100"
wss_endpoint = "wss://localhost:8100"
http_endpoint = "http://localhost:8281"
https_endpoint = "https://localhost:8244"
Gateway Worker
[apim.key_manager]
service_url = "https://<hostname>:9443/services/"
username = "$ref{super_admin.username}"
password = "$ref{super_admin.password}"
[apim.throttling]
service_url = "https://<hostname>:9443/services/"
throttle_decision_endpoints = ["tcp://<hostname>:5672"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<hostname>:9611"]
traffic_manager_auth_urls = ["ssl://<hostname>:9711"]
[apim.sync_runtime_artifacts.gateway]
gateway_labels =["Default","Production Gateway"]
Note: I started these servers using the switches -Dprofile=control-plane and -Dprofile=gateway-worker respectively.
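For reference, the startup commands would have looked something like this (a sketch; the paths assume a default wso2am-4.0.0 unzip on each node):
# control plane node
sh wso2am-4.0.0/bin/api-manager.sh -Dprofile=control-plane
# gateway worker node
sh wso2am-4.0.0/bin/api-manager.sh -Dprofile=gateway-worker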
Am I missing any configuration here?
Thanks in advance.

R Oracle DB connection fails with dbPool but succeeds with dbConnect

I'm attempting to refactor older code to make use of DB pools using the pool package's dbPool function.
Historically I've been using the DBI package's dbConnect function without issue. I can successfully create a connection to my Oracle database with the below code (all credentials are faked):
conn <- DBI::dbConnect(
ROracle::Oracle(),
"database.abcd1234.us-east-1.rds.amazonaws.com/orcl",
username="username",
password="hunter2"
)
However, when I use the same credentials in the same development environment to attempt to create a pool like this:
pool <- pool::dbPool(
drv = ROracle::Oracle(),
dbname = "orcl",
host = "database.abcd1234.us-east-1.rds.amazonaws.com",
username = "username",
password = "hunter2"
)
I get an error:
Error in .oci.Connect(.oci.drv(), username = username, password = password, :
ORA-12162: TNS:net service name is incorrectly specified
I've used dbPool before, but with Postgres databases instead of Oracle, and for Postgres it just worked! I'm thinking that because my credentials work fine for dbConnect, there must be some small thing I'm missing that's needed for dbPool to work correctly too.
orcl is the service name, not the database name.
Try:
pool <- pool::dbPool(
drv = ROracle::Oracle(),
host = "database.abcd1234.us-east-1.rds.amazonaws.com/orcl",
username = "username",
password = "hunter2"
)
Or
pool <- pool::dbPool(
drv = ROracle::Oracle(),
sid = "orcl",
host = "database.abcd1234.us-east-1.rds.amazonaws.com",
username = "username",
password = "hunter2"
)
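Either way, once the pool is created it can be used like a regular DBI connection and should be closed when finished; a quick sanity check might look like this (the query is just illustrative):
# run a trivial query through the pool to confirm connectivity
DBI::dbGetQuery(pool, "SELECT 1 FROM dual")
# return all checked-out connections and close the pool
pool::poolClose(pool)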

RabbitMQ SSL Configuration: DotNet Client

I am trying to connect (dotnet client) to RabbitMQ. I enabled the Peer verification option from the RabbitMQ config file.
_factory = new ConnectionFactory
{
HostName = Endpoint,
UserName = Username,
Password = Password,
Port = 5671,
VirtualHost = "/",
AutomaticRecoveryEnabled = true
};
sslOption = new SslOption
{
Version = SslProtocols.Tls12,
Enabled = true,
AcceptablePolicyErrors = System.Net.Security.SslPolicyErrors.RemoteCertificateChainErrors
| System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch,
ServerName = "", // ?
Certs = X509CertCollection
};
Below are the details of my client certificate, which I am passing through "X509CertCollection".
CertSubject: CN=myhostname, O=MyOrganizationName, C=US // myhostname is the name of my client host.
So, if I pass "myhostname" value into sslOption.ServerName, it works. If I pass some garbage value, it still works.
As per the RabbitMQ documentation, these two values should match, i.e. the certificate CN and ServerName. What should the value of sslOption.ServerName be here, and why?
My bad. I found the reason. Posting as it might help someone.
Reason: I had included System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch in AcceptablePolicyErrors, so any mismatch between the server certificate name and ServerName was simply ignored, which is why any ServerName value appeared to work.
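For comparison, a sketch of a stricter setup (the hostname below is illustrative), assigning the options straight to the factory's Ssl property, where a wrong ServerName would actually fail instead of being silently accepted:
_factory.Ssl = new SslOption
{
    Enabled = true,
    Version = SslProtocols.Tls12,
    // AcceptablePolicyErrors left at its default (SslPolicyErrors.None),
    // so chain and name-mismatch errors are no longer ignored
    ServerName = "rabbitmq.example.com", // must match the CN/SAN of the server's certificate
    Certs = X509CertCollection // client certificate(s) used for peer verification
};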

ssl connection for RJDBC

My company is instituting an ssl requirement soon for database connections.
I previously connected to our Vertica database via the DBI and RJDBC packages. I have tried adding an sslmode='require' parameter to my connection, but adding this parameter has no effect: I can still connect to the database, but the connection is not SSL.
Can anyone advise on how to enable ssl connection for DBI? In PyCharm I merely had to set ssl to true in the driver properties.
DBI::dbConnect(
drv = RJDBC::JDBC(
driverClass = driver_class,
classPath = class_path
),
url = url,
UID = user_id,
PWD = password,
sslmode = 'require'
)
A different ssl parameter was required. I am having connection success with the call below, which uses ssl = 'true':
DBI::dbConnect(
drv = RJDBC::JDBC(
driverClass = driver_class,
classPath = class_path
),
url = url,
UID = user_id,
PWD = password,
ssl = 'true'
)

update existing terraform compute instance when adding new "components"

I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance. It could be anything, but here I have tried with a file provisioner and remote execution.
Indeed, when I add these arguments to my compute instance, I notice that the compute instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
}
Indeed, after adding the provisioners to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get an info message telling me that:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
From the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way that Terraform can check the state of a local or a remote execution - it's not like there's an API call that can tell you what happened in your custom code, bar-setup.sh.
That would be like magic, or actual Magic.
Terraform is for managing the infrastructure and the config of the instance, not really for the content on the instance. Immutable content and recreating is the true path here: making a completely new instance. However, if it's your Hammer, there are ways.
If you taint the resource that you want to update, then the next time terraform is run the resource will be destroyed and recreated, re-running its provisioners. But heed what I said about Hammers.
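As a sketch, using the resource address from the config above, that would look roughly like:
terraform taint openstack_compute_instance_v2.compute_instance
terraform plan
terraform apply
On the next apply the instance is destroyed and recreated, and the file and remote-exec provisioners run again.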
Alternatively, you could leverage your CM tool of choice to manage the content of your instance - Chef/Ansible - or create the (immutable) images used by OpenStack via a tool like Packer and update those. I'd do the latter.
