Logstash mixes output when using two config files - nginx

I'm using Logstash 1.5.6 on Ubuntu.
I wrote two config files in /etc/logstash/conf.d, specifying different input/output locations:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "mis-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/mis_template.json"
      template_overwrite => true
    }
  }
}
However, data from /mnt/logs/api_log_access.log and /mnt/logs/mis_log_access.log both show up in both the api-%{+YYYY.MM.dd} and mis-%{+YYYY.MM.dd} indices, which is not what I want.
What's wrong with the configuration? Thanks.

Logstash reads all the files in your configuration directory and merges them all together into one config.
To make one filter or output section only run for one type of input, use conditionals:
if [type] == "api" {
  ...
}
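Applied to the asker's File A, for example, the guarded output would look something like this (a sketch reusing the settings from the question):

output {
  if [type] == "api" and "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}

File B's filter and output sections get wrapped in if [type] == "mis" the same way.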

It is better to also gate the filters on the input type:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  if [type] == "api" {
    ...
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  if [type] == "mis" {
    ...
  }
}
File C: output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
This works with Logstash 5.1.
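If you still need the per-type index templates from the original question, you can instead keep two elasticsearch outputs in output.conf, each guarded by a conditional (a sketch combining the question's settings with the Logstash 5.x hosts option):

output {
  if [type] == "api" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  } else if [type] == "mis" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "mis-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/mis_template.json"
      template_overwrite => true
    }
  }
}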

Related

Gridsome context variable not passed to page-query

I can't access a primaryTag variable in my GraphQL page-query.
What I want to achieve on a blog Post page:
- display the post content
- display the related posts (based on the first tag)
In my gridsome.server.js
api.createPages(async ({ graphql, createPage }) => {
  // Use the Pages API here: https://gridsome.org/docs/pages-api
  const { data } = await graphql(`{
    allPost {
      edges {
        node {
          id
          path
          tags {
            id
          }
        }
      }
    }
  }`)
  data.allPost.edges.forEach(({ node }) => {
    createPage({
      path: `${node.path}`,
      component: './src/templates/Post.vue',
      context: {
        id: node.id,
        path: node.path,
        primaryTag: (node.tags[0] && node.tags[0].id) || '',
      }
    })
  })
})
Then in my Post.vue:
<page-query>
query Post ($path: String!, $primaryTag: String!) {
  post: post (path: $path) {
    title
    path
    content
  }
  related: allPost(
    filter: { tags: { contains: [$primaryTag] }, path: { ne: $path } }
  ) {
    edges {
      node {
        id
        title
        path
      }
    }
  }
}
</page-query>
Unfortunately I get the following error: `Variable "$primaryTag" of non-null type "String!" must not be null.`
Also, as a side note (and that might be the issue), I'm using @gridsome/source-filesystem and @gridsome/transformer-remark to create my Post collection.
If you know how to solve this or have a better approach for getting the related posts, comment below.
Libs:
- gridsome version: 0.6.3
- @gridsome/cli version: 0.1.1

How should I modify logstash.conf to get the field I want?

I use ELK + Filebeat, all at version 6.4.3, on Windows 10.
I added a custom field in filebeat.yml; the key name is log_type and its value is nginx-access (shown in a screenshot of filebeat.yml in the original post).
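That part of filebeat.yml would look roughly like this (a minimal sketch; the log path is a hypothetical placeholder):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\nginx\logs\access.log    # hypothetical path to the nginx access log
  fields:
    log_type: nginx-access

Since fields_under_root is left at its default (false), the field arrives in Logstash as [fields][log_type], which matches the conditional in the config below.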
The content of logstash.conf is:
input {
  beats {
    host => "0.0.0.0"
    port => "5544"
  }
}
filter {
  mutate {
    rename => { "[host][name]" => "host" }
  }
  if [fields][log_type] == "nginx-access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" \"%{DATA:[nginx][access][x_forwarded_for]}\" %{NUMBER:[nginx][access][request_time]}"] }
    }
    mutate {
      copy => { "[nginx][access][request_time]" => "[nginx][access][requesttime]" }
    }
    mutate {
      convert => {
        "[nginx][access][requesttime]" => "float"
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
When I run the command:
logstash.bat -f logstash.conf
the output is as shown in the screenshot in the original post.
Question 1:
The fields highlighted in the screenshot are "requesttime" and "request_time", but what I want is nginx.access.requesttime and nginx.access.request_time, not requesttime and request_time. How should I modify logstash.conf to achieve my goal?
Question 2:
When I use the above logstash.conf, only the "request_time" field appears among the fields in the Kibana management interface (again shown in a screenshot).
If I want the "nginx.access.requesttime" field to also appear in the fields of the Kibana management interface, how should I modify logstash.conf?
Question 1:
I believe what you are looking for is:
mutate {
  copy => { "[nginx][access][request_time]" => "nginx.access.requesttime" }
}
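Note the target syntax here (my reading of Logstash field references, not part of the original answer): a bracketed path like [nginx][access][requesttime] addresses a nested field, while a plain dotted string creates a single top-level field whose name literally contains dots, which is what makes "nginx.access.requesttime" show up as-is:

mutate {
  # creates a top-level field literally named "nginx.access.requesttime"
  copy => { "[nginx][access][request_time]" => "nginx.access.requesttime" }
  # whereas this would create the nested field [nginx][access][requesttime]
  # copy => { "[nginx][access][request_time]" => "[nginx][access][requesttime]" }
}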
Question 2:
Whether something is a keyword is determined by the template field mapping in Elasticsearch. Try the option above and see if the issue is resolved.
This issue in the Elastic forum may help you.

Processing custom NGINX log with logstash

I have an nginx access log that logs the request body in the form of a JSON string, e.g.
"{\x0A\x22userId\x22 : \x22MyUserID\x22,\x0A\x22title\x22 : \x22\MyTitle\x0A}"
My objective is to store those two values (userId and title) in two separate fields in Elasticsearch.
My Logstash config:
filter {
  if [type] == "nginxlog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{NUMBER:res_time} %{GREEDYDATA:extra_fields}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      gsub => [
        "extra_fields", "\\x22", '"',
        "extra_fields", "\\x0A", '\n'
      ]
    }
    json {
      source => "extra_fields"
      target => "extra_json"
    }
    mutate {
      add_field => {
        "userId" => "%{[extra_json][userId]}"
        "title" => "%{[extra_json][title]}"
      }
    }
  }
}
But it's not properly extracted; the value of userId in ES is "userId" instead of MyUserID. Any clue where the problem is? Or any better idea to achieve the same?

logstash nginx pattern throws the result in _grokparsefailure

I have an nginx pattern that was successfully tested in the Grok Constructor, but when I add it to Logstash 1.5.3 the logs end up with _grokparsefailure.
Here is a sample of my access.log:
207.46.13.34 - - [14/Aug/2015:18:33:50 -0400] "GET /tag/dnssec/ HTTP/1.1" 200 1961 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-"
and here is the nginx pattern:
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:answer} %{NUMBER:byte} "%{URI:referrer}" %{QS:referee} %{QS:agent}
My logstash.conf looks like this:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/z0z0.tk.crt"
    ssl_key => "/etc/pki/tls/private/z0z0.tk.key"
  }
}
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "172.17.0.5"
    cluster => "clustername"
    flush_size => 2000
  }
}
You're trying to match "-" into the field referrer using the pattern URI. Unfortunately, "-" is not a valid character in the URI pattern, which is expecting something like "http://..."
There are pattern examples that match a string or a hyphen (like part of the built-in COMMONAPACHELOG):
(?:%{NUMBER:bytes}|-)
which you could adjust to your pattern.
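Applied to the referrer field, that adjustment would look something like this (a sketch; the asker's final pattern below uses the same idea):

\"(?:%{URI:referrer}|-)\"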
Thanks, Alain, for your suggestion. I recreated the pattern, but having it in /opt/logstash/pattern/nginx did not work, so I moved it into logstash.conf, which works and looks like this:
if [type] == "nginx-access" {
  grok {
    match => { 'message' => '%{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}' }
  }
}

How to modify settings and tasks in plugin based on another plugin's availability?

In the build.sbt file of a plugin I have written, I have the following two lines:
scroogeThriftDependencies in Compile := Seq("shared_2.10")

mappings in (Compile, packageBin) ~= { (ms: Seq[(File, String)]) =>
  ms filter { case (file, toPath) =>
    !toPath.startsWith(s"shared")
  }
}
but I only want to do this if the build also contains the Scrooge plugin. How can this be accomplished?
I have tried the approach below but it didn't work:
lazy val onlyWithScrooge = taskKey[Unit]("Executes only if the Scrooge plugin is part of the build")

onlyWithScrooge := {
  val structure: BuildStructure = Project.extract(state.value).structure
  val pluginNames = structure.units.values.map { un => un.unit.plugins.detected }
  pluginNames.foreach(
    plugins => {
      plugins.plugins.modules.foreach {
        plugin =>
          if (plugin._1 == "com.twitter.scrooge.ScroogeSBT") {
            // I get here at least
            scroogeThriftDependencies in Compile := Seq("shared_2.10")
            mappings in (Compile, packageBin) ~= { (ms: Seq[(File, String)]) =>
              ms filter { case (file, toPath) =>
                !toPath.startsWith(s"shared")
              }
            }
          }
      }
    }
  )
}

(scroogeGen in Compile) <<= (scroogeGen in Compile) dependsOn onlyWithScrooge
