Processing a custom NGINX log with Logstash

I have an nginx access log that logs the request body as a JSON string, e.g.:
"{\x0A\x22userId\x22 : \x22MyUserID\x22,\x0A\x22title\x22 : \x22\MyTitle\x0A}"
My objective is to store those 2 values (userId and title) in 2 separate fields in Elasticsearch.
My Logstash config:
filter {
  if [type] == "nginxlog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{NUMBER:res_time} %{GREEDYDATA:extra_fields}" }
      add_field => [ "received_at", "%{#timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      gsub => [
        "extra_fields", x22 ,'"' ,
        "extra_fields", x0A ,'\n'
      ]
    }
    json {
      source => "extra_fields"
      target => "extra_json"
    }
    mutate {
      add_field => {
        "userId => "%{[extra_json][userId]}"
        "title" => "%{[extra_json][title]}"
      }
    }
  }
}
But it's not extracted properly: in ES the value of userId is userId instead of MyUserID. Any clue where the problem is? Or any better idea to achieve the same thing?
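A few things stand out: the gsub patterns x22 and x0A are not quoted strings (as written, the config will not even load), the add_field key "userId is missing its closing quote, and %{#timestamp} should be %{@timestamp}. A corrected sketch, untested, assuming Logstash's default escape handling (so "\\x22" in the config is a regex matching the literal text \x22):
filter {
  if [type] == "nginxlog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{NUMBER:res_time} %{GREEDYDATA:extra_fields}" }
      add_field => {
        "received_at" => "%{@timestamp}"
        "received_from" => "%{host}"
      }
    }
    mutate {
      # Turn the escaped sequences back into real characters so the json
      # filter can parse the string; a space is enough for \x0A because the
      # newlines only separate JSON tokens.
      gsub => [
        "extra_fields", "\\x22", '"',
        "extra_fields", "\\x0A", " "
      ]
    }
    json {
      source => "extra_fields"
      target => "extra_json"
    }
    mutate {
      add_field => {
        "userId" => "%{[extra_json][userId]}"
        "title" => "%{[extra_json][title]}"
      }
    }
  }
}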

Related

Custom field not saved

I'm trying to add a custom user field using WPGraphQL, so I tried to recreate the example in the official WPGraphQL documentation, https://docs.wpgraphql.com/extending/fields/#register-fields-to-the-schema:
add_action('graphql_init', function () {
    $hobbies = [
        'type' => ['list_of' => 'String'],
        'description' => __('Custom field for user mutations', 'your-textdomain'),
        'resolve' => function ($user) {
            $hobbies = get_user_meta($user->userId, 'hobbies', true);
            return !empty($hobbies) ? $hobbies : [];
        },
    ];
    register_graphql_field('User', 'hobbies', $hobbies);
    register_graphql_field('CreateUserInput', 'hobbies', $hobbies);
    register_graphql_field('UpdateUserInput', 'hobbies', $hobbies);
});
I already changed the type from \WPGraphQL\Types::list_of( \WPGraphQL\Types::string() ) to ['list_of' => 'String'].
If I now execute the updateUser mutation, my hobbies don't get updated. What am I doing wrong?
Mutation:
mutation MyMutation {
  __typename
  updateUser(input: {clientMutationId: "tempId", id: "dXNlcjox", hobbies: ["football", "gaming"]}) {
    clientMutationId
    user {
      hobbies
    }
  }
}
Output:
{
  "data": {
    "__typename": "RootMutation",
    "updateUser": {
      "clientMutationId": "tempId",
      "user": {
        "hobbies": []
      }
    }
  }
}
Thanks to xadm, the only thing I forgot was to actually persist the field in the mutation. I was a bit confused by the documentation, my fault. (I really am new to WPGraphQL, by the way.)
Here's what has to be added:
add_action('graphql_user_object_mutation_update_additional_data', 'graphql_register_user_mutation', 10, 5);

function graphql_register_user_mutation($user_id, $input, $mutation_name, $context, $info)
{
    if (isset($input['hobbies'])) {
        // Consider other sanitization if necessary and validation such as which
        // user role/capability should be able to insert this value, etc.
        update_user_meta($user_id, 'hobbies', $input['hobbies']);
    }
}
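With that hook in place, re-running the mutation above should return the saved values. A quick way to double-check, reusing the user ID from the mutation example:
query {
  user(id: "dXNlcjox") {
    hobbies
  }
}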

How should I modify logstash.conf to get the field I want?

I use ELK + Filebeat, all at version 6.4.3, on Windows 10.
I added a custom field in filebeat.yml: the key name is log_type and its value is nginx-access.
[Screenshot showing part of filebeat.yml.]
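Roughly, the relevant part of filebeat.yml looks like this (a sketch; the log path is hypothetical, and fields: nests the key under [fields], which is what the conditional below tests):
filebeat.inputs:
  - type: log
    paths:
      - C:\nginx\logs\access.log    # hypothetical path
    fields:
      log_type: nginx-access        # arrives in Logstash as [fields][log_type]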
The content of logstash.conf is:
input {
  beats {
    host => "0.0.0.0"
    port => "5544"
  }
}
filter {
  mutate {
    rename => { "[host][name]" => "host" }
  }
  if [fields][log_type] == "nginx-access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" \"%{DATA:[nginx][access][x_forwarded_for]}\" %{NUMBER:[nginx][access][request_time]}"] }
    }
    mutate {
      copy => { "[nginx][access][request_time]" => "[nginx][access][requesttime]" }
    }
    mutate {
      convert => {
        "[nginx][access][requesttime]" => "float"
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
When I run the command:
logstash.bat -f logstash.conf
the output is: [screenshot of the rubydebug output.]
Question 1:
The fields in the red box above are requesttime and request_time, but I want them to be nginx.access.requesttime and nginx.access.request_time, not requesttime and request_time. How should I modify logstash.conf to achieve my goal?
Question 2:
When I use the above logstash.conf, the only such field in the Kibana management interface is request_time.
[Screenshot of the Kibana fields list.]
If I want the nginx.access.requesttime field to also appear in the fields of the Kibana management interface, how should I modify logstash.conf?
Question 1:
I believe what you are looking for is
mutate {
  copy => { "[nginx][access][request_time]" => "nginx.access.requesttime" }
}
Question 2:
Whether something is a keyword is determined by the template field mapping in Elasticsearch. Try the option above and see if the issue is resolved.
This issue in the Elastic forum may help you.
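For illustration, a minimal index template that maps the field explicitly might look like this (a sketch for Elasticsearch 6.x; the template name, the logstash-* index pattern, and the doc mapping type are assumptions to adjust to your setup):
PUT _template/nginx_access
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "nginx": {
          "properties": {
            "access": {
              "properties": {
                "requesttime": { "type": "float" }
              }
            }
          }
        }
      }
    }
  }
}
Once the template applies to a newly created index, refresh the index pattern in Kibana so the field shows up in the management interface.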

Logstash mixes output when using two config files

I'm using Logstash 1.5.6 on Ubuntu.
I wrote two config files in /etc/logstash/conf.d, specifying different input/output locations:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "mis-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/mis_template.json"
      template_overwrite => true
    }
  }
}
However, I can see data from /mnt/logs/mis_log_access.log and /mnt/logs/nginx/dmt_access.log both shown in the api-%{+YYYY.MM.dd} and mis-%{+YYYY.MM.dd} indices, which is not what I wanted.
What's wrong with the configuration? Thanks.
Logstash reads all the files in your configuration directory and merges them all together into one config.
To make one filter or output section only run for one type of input, use conditionals:
if [type] == "api" {
....
}
It's better to guard the filters with conditionals on the input type:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  if [type] == "api" {
    ...
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  if [type] == "mis" {
    ...
  }
}
File C: output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
This works with Logstash 5.1.

Sum and multiply of multiple fields for aggregations in Elasticsearch and NEST

I am new to Elasticsearch. I use NEST to query data from Elasticsearch.
What I want is a way to compute an expression over multiple fields in an aggregation.
Example:
public class InfoComputer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int price { get; set; }
    public int quantity { get; set; }
}
var result = client.Search<InfoComputer>(s => s
    .Aggregations(a => a
        .Terms("names", st => st
            .Field(o => o.Name)
            .Aggregations(aa => aa
                .Sum("price", m => m
                    .Field(o => o.price)
                )
            )
        )
    )
);
This code only gets the sum of price.
How can I get Sum(price * quantity) grouped by the Name attribute?
In newer versions you need to specify inline inside the script tag:
"aggs": {
"oltotal": {
"sum": {
"script": { "inline": "doc['price'].value*doc['amount'].value" }
}
}
}
You need a scripted sum aggregation:
{
  "aggs": {
    "terms_agg": {
      "terms": {
        "field": "name"
      },
      "aggs": {
        "sum_agg": {
          "sum": {
            "script": "doc['price'].value * doc['quantity'].value"
          }
        }
      }
    }
  }
}
There is script support in NEST, you can modify your aggregation like this, i.e. by using Script() instead of Field():
var result = client.Search<InfoComputer>(s => s
    .Aggregations(a => a
        .Terms("names", st => st
            .Field(o => o.Name)
            .Aggregations(aa => aa
                .Sum("price", m => m
                    .Script("doc['price'].value * doc['quantity'].value")
                )
            )
        )
    )
);
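To read the per-name totals back out of the response, something along these lines should work (a sketch; depending on your NEST version the aggregations may be exposed as result.Aggs rather than result.Aggregations):
// Walk the terms buckets and pull the scripted sum out of each one.
var names = result.Aggregations.Terms("names");
foreach (var bucket in names.Buckets)
{
    // bucket.Key is the Name value; the sum's Value is a nullable double.
    Console.WriteLine($"{bucket.Key}: {bucket.Sum("price").Value}");
}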

Logstash nginx pattern throws the result into _grokparsefailure

I have an nginx pattern that was successfully tested in Grok Constructor, but when I add it to Logstash 1.5.3 the logs end up tagged with _grokparsefailure.
Here is a sample of my access.log:
207.46.13.34 - - [14/Aug/2015:18:33:50 -0400] "GET /tag/dnssec/ HTTP/1.1" 200 1961 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-"
and here is the nginx pattern:
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:answer} %{NUMBER:byte} "%{URI:referrer}" %{QS:referee} %{QS:agent}
My logstash.conf looks like this:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/z0z0.tk.crt"
    ssl_key => "/etc/pki/tls/private/z0z0.tk.key"
  }
}
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "${NGINXACCESS}" }
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "172.17.0.5"
    cluster => "clustername"
    flush_size => 2000
  }
}
You're trying to match "-" into the field referrer using the pattern URI. Unfortunately, "-" is not a valid character in the URI pattern, which is expecting something like "http://..."
There are pattern examples that match a string or a hyphen (like part of the built-in COMMONAPACHELOG):
(?:%{NUMBER:bytes}|-)
which you could adjust to your pattern.
Thanks, Alain, for your suggestion. I recreated the pattern, but having it in /opt/logstash/pattern/nginx did not work, so I moved it into logstash.conf, which works. It looks like this:
if [type] == "nginx-access" {
grok {
match => { 'message' => '%{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}' }
}
}
