Search for hostnames in Kibana

I've tried searching for hostnames in Kibana using part of the server name:
B-wit-a2pgw-*
Also I tried:
hostname: B-wit-a2pgw-*
And:
instance: B-wit-a2pgw-*
I have the time frame set to today, but nothing turns up in the Kibana console. What am I doing wrong?

You can search for it as servername:B*wit*a2pgw*, or, if the field name is hostname, then:
hostname:B*wit*a2pgw*

If your hostname field is already analyzed, you can use this:
hostname.raw: B-wit-a2pgw-*
If it isn't analyzed, you can use this:
hostname: B-wit-a2pgw-*
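If you are not sure whether the field is analyzed or whether a .raw sub-field exists, you can check the mapping directly in Elasticsearch. A quick sketch (the logstash-* index pattern is an assumption, substitute your own index):

# Show how the hostname field (and any hostname.raw sub-field) is mapped.
# An analyzed (text) field is tokenized on the hyphens, so a wildcard over the
# whole value can fail; a not_analyzed/keyword field keeps the full value and
# matches B-wit-a2pgw-*.
curl -s 'http://localhost:9200/logstash-*/_mapping/field/hostname*?pretty'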

Related

Configure grafana-loki output plugin

I'm trying to use the grafana-loki output plugin in Fluent Bit, but it seems impossible to configure with TLS.
I had a working configuration running with the loki plugin like this:
[OUTPUT]
    Name loki
    Match *
    Host my-collector-url-for-loki
    Port 443
    Http_User m-user
    Http_Passwd some-token-value
    Labels job=fluentbit
    auto_kubernetes_labels on
    Tls On
    Tls.verify On
But the problem with this output plugin was that the logs were not showing correctly in Grafana. I think a filter or parser needs to be configured for it, or maybe the plugin is just meant for Loki rather than Grafana/Loki; I just don't know, and I got tired of trying to figure out why. So I switched to the grafana-loki plugin and the logs looked perfect in Grafana, but I only had it working without authentication.
This is my setup with the grafana-loki output plugin:
[Output]
    Name grafana-loki
    Match *
    Url https://url-to-my-logs-collector
    TenantID ""
    BatchWait 1
    BatchSize 1048576
    Labels {job="test-fluent-bit"}
    RemoveKeys kubernetes,stream
    AutoKubernetesLabels false
    LabelMapPath /fluent-bit/etc/labelmap.json
    LineFormat json
    LogLevel warn
    # everything prior to this line is working successfully
    # trying to set authentication here "this part doesn't work"
    Tls On
    Tls.verify On
    Http_User m-user
    Http_Passwd some-token-value
The problem with this setup is that I always get a 403 Forbidden HTTP status. I'm having trouble figuring out how to set authentication on this plugin. Does anyone have a working configuration for this type of setup?
Authentication worked for me with this plugin using a configuration like the one below:
[Output]
    Name grafana-loki
    Match *
    Url https://${user_loki}:${pass_loki}@url-to-my-logs-collector
    BatchWait 1s
    BatchSize 102400
    TenantID ""
    (...)
TLS, http.user, and http.passwd options are not supported by this plugin, as far as I could tell.
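Putting that together, a complete [Output] block with the credentials embedded in the Url might look roughly like this (the host and credential values are the placeholders from the question, not a verified setup):

[Output]
    Name grafana-loki
    Match *
    # user and token go directly in the URL; per the note above, Http_User/Http_Passwd and Tls.* are not supported by this plugin
    Url https://m-user:some-token-value@url-to-my-logs-collector
    TenantID ""
    BatchWait 1s
    BatchSize 102400
    Labels {job="test-fluent-bit"}
    LineFormat json
    LogLevel warn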

SaltStack - mine.get is able to grab mine_function data from master, but not in .sls or jinja variable

I hope you can help me with a rather frustrating issue I have been having. I have been trying to remove static config from some config files and move it to Pillar/Mine data using SaltStack.
Everything is going well, with the exception of one specific task.
This is grabbing data (a custom grain) from three specific minions to make three different variables in an .sls (context) or a Jinja file (direct variable) on other minions, but I cannot seem to get it to work.
(My scenario is flexible, as I can call this in either a state file or a Jinja variable in a config file.)
This is on AWS EC2 instances, but it can be replicated away from AWS in my lab. The grain I need is "public_ipv4", and the reason I cannot use network.util in the salt runner is that this is NAT'd and the box doesn't have a second interface with the public IP assigned to it. (This cannot be changed.)
Pillar data works and I have an init.sls for the mine function:
mine_functions:
  grains.item:
    - location
    - environment
    - roles
    - srvtype
    - instance
    - az
    - public_ipv4
    - fqdn
    - ipv4
    - ipv6
(Also, the custom grain "public_ipv4" works when called from the minion, so I know it is not the grains themselves that are incorrect.)
When targeting via the master using the below, it brings back the requested information:
my-minion:
    ----------
    minion-with-data-i-want-1:
        ----------
        az:
            c
        environment:
            dev
        fqdn:
            correct_fqdn
        instance:
            3
        ipv4:
            - Correct_local_ip
            - 127.0.0.1
        ipv6:
            - ::1
            - Correct_ip
        location:
            correct_location
        public_ipv4:
            Correct_public_ip
        roles:
            Correct_role
        srvtype:
            None
It is key to note here that the above comes from:
salt '*globbed_target*' mine.get '*minions-with-data-i-need-glob*' grains.item
This is from the master, but I cannot single out a specific grain by using indexing or any args/kwargs etc.
So I put some syntax into a state file and some Jinja templates, and I cannot get it to work. Here are a few things I have tried so far:
Jinja:
{# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #}
Above returns nothing.
State file:
- context:
  - ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') }}
The above returns a dict error:
Context must be formed as a dict
Running latest salt-minion/master from apt.
Steps I have taken:
Running salt '*' mine.update after every change, and checking with salt '*' mine.valid after every change; the functions show up as expected.
Any help is appreciated.
This looks like you are running into a classic problem: not knowing what you are getting as the return value.
First, your {# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #} returns nothing because it is a Jinja comment. Use {% set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item') %} instead.
The next problem is that you are passing a list to context, when it is supposed to take a dict; the error isn't even related to mine.
Try this instead:
- context:
    ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') | json }}
Next, learn to use slsutil.renderer to look at how things are rendered, e.g. salt minion slsutil.renderer salt://thing/init.sls default_renderer=jinja
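Building on that, here is a minimal Jinja sketch for pulling a single grain such as public_ipv4 out of the mine.get return instead of indexing it (the glob and grain names are the ones from the question):

{# mine.get returns a dict keyed by minion id; each value is the grains.item dict #}
{% set mine_data = salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') %}
{% for minion_id, grain_data in mine_data.items() %}
public_ip_{{ minion_id }}: {{ grain_data.get('public_ipv4') }}
{% endfor %}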

Euca 5.0 Ansible Console Task Failing

Background:
I am only able to get past the Ansible console install/config tasks by adding --region localhost anywhere in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml wherever it calls tools that take that argument.
Otherwise each sub-task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the euca server being configured gives the same result unless I specify --region localhost.
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running except the ImagingBackend, which is Not Ready.
No instances are running according to euca-describe-instances
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
You should see "ns1.cloud.lan.com", and you should then be able to use that nameserver to resolve services, e.g.
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
The second item is a bug in the ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed you may have a script to do this:
/usr/local/bin/console-manage-stack -a delete
The related playbook change is https://github.com/AppScale/ats-deploy/pull/36

Problems getting Symfony to work locally with the `host: "web.{domain}"` setting

I am trying to get an old project (not made by me) up and running, and I see that the routes are configured in some peculiar format. This is a typical route config:
customer_home:
    path: /customer
    host: "web.{domain}"
    defaults:
        _controller: "BackendBundle:Customer:index"
        domain: "%domain%"
    methods: [get]
    options:
        expose: true
    requirements:
        domain: '%domain%'
Now, I grepped the source code and found domain in the config files. It was null by default, and by setting it to localhost:8000 I was able to at least load the root without complaints about %domain%. Now it complains about not finding a matching route, which makes sense, as none was configured. What was configured (which I found by running console debug:router) was a root route for admin.{domain} and web.{domain}. I assume this means that if the domain is myapp.com, there should be routes configured for admin.myapp.com and web.myapp.com.
This is a local development setup, running on 127.0.0.1:8000, so I tried adding this to /etc/hosts:
127.0.0.1 web.localhost admin.localhost
I was now hoping that going to web.localhost:8000 would load a route, but none was matched. I still get NotFoundHttpException, but now I no longer understand why... How can I configure this setup so that I can load the web and admin subdomains on my development machine? Other routes, like /api/1/doc, work fine.
I was not far off. The answer was simply to drop the port portion of what I had entered as the domain setting, so domain: localhost did the trick. The server runs on port 8000 by default regardless of this setting, so the port was not needed. I can now access web.localhost and admin.localhost (after adding them as host aliases for the loopback device in /etc/hosts).
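For reference, the working local setup boils down to something like this (assuming the %domain% parameter lives in app/config/parameters.yml, which is typical for older Symfony projects; the exact file may differ in this project):

# app/config/parameters.yml (location assumed)
parameters:
    domain: localhost    # no port; the built-in server's port does not affect host matching

# /etc/hosts
127.0.0.1 web.localhost admin.localhost

With that in place, http://web.localhost:8000/customer should match the customer_home route shown above.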

cert-manager is failing with "Waiting for dns-01 challenge propagation: Could not determine authoritative nameservers"

I have created cert-manager on aks-engine using the command below:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
my certificate spec
issuer spec
I'm using nginx as the ingress. I can see the TXT record in the Azure DNS zone created by my azuredns service principal, but I'm not sure what the issue with the nameservers is.
I ran into the same error... I suspect that it's because I'm using a mix of private and public Azure DNS entries, and the record needs to get added to the public entry so Let's Encrypt can see it. However, cert-manager performs a check that the TXT record is visible before asking Let's Encrypt to perform the validation. I assume that the default DNS cert-manager looks at is the private one, and because there's no TXT record there, it gets stuck on this error.
The way around it, as described on cert-manager.io, is to override the default DNS using extraArgs (I'm doing this with Terraform and Helm):
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
set {
name = "installCRDs"
value = "true"
}
set {
name = "extraArgs"
value = "{--dns01-recursive-nameservers-only,--dns01-recursive-nameservers=8.8.8.8:53\\,1.1.1.1:53}"
}
}
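To confirm the flags actually reached the controller once the release is applied, something like this can help (the namespace and deployment name here are assumptions; adjust them to wherever the release was installed):

# print the args passed to the cert-manager controller container
kubectl -n cert-manager get deploy cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].args}'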
The issue for me was that I was missing some annotations in the ingress:
cert-manager.io/cluster-issuer: hydrantid
kubernetes.io/tls-acme: 'true'
In my case I am using hydrantid as the issuer, but most people use letsencrypt, I guess.
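As a minimal sketch of where those annotations live (the name, host, and secret below are hypothetical, the issuer should be your own such as hydrantid or a letsencrypt issuer, and older clusters may use the extensions/v1beta1 Ingress API instead):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls   # cert-manager stores the issued certificate here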
I had a similar error when my certificate was stuck in pending, and below is how I resolved it:
kubectl get challenges
urChallengeName
Then run the following:
kubectl patch challenge/urChallengeName -p '{"metadata":{"finalizers":[]}}' --type=merge
When you run get challenges again, it should be gone.
