I have an nginx grok pattern which tested successfully in Grok Constructor, but when I add it to Logstash 1.5.3 the logs end up with _grokparsefailure.
Here is a sample of my access.log:
207.46.13.34 - - [14/Aug/2015:18:33:50 -0400] "GET /tag/dnssec/ HTTP/1.1" 200 1961 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-"
and here is the nginx pattern:
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:answer} %{NUMBER:byte} "%{URI:referrer}" %{QS:referee} %{QS:agent}
My logstash.conf looks like this:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/z0z0.tk.crt"
    ssl_key => "/etc/pki/tls/private/z0z0.tk.key"
  }
}
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "${NGINXACCESS}" }
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "172.17.0.5"
    cluster => "clustername"
    flush_size => 2000
  }
}
You're trying to match "-" into the field referrer using the pattern URI. Unfortunately, "-" is not a valid character in the URI pattern, which is expecting something like "http://..."
There are existing patterns that match either a value or a hyphen (like this part of the built-in COMMONAPACHELOG):
(?:%{NUMBER:bytes}|-)
which you could adjust to your pattern.
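Adapted to your pattern, the referrer section would look something like this (a sketch reusing your field name):

(?:%{URI:referrer}|-)

i.e. the quoted part of the pattern becomes "(?:%{URI:referrer}|-)", so a bare hyphen no longer breaks the match.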
Thanks, Alain, for your suggestion. I recreated the pattern, but having it in /opt/logstash/pattern/nginx did not work, so I moved it into logstash.conf, which works. It looks like this:
if [type] == "nginx-access" {
  grok {
    match => { 'message' => '%{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}' }
  }
}
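For reference, custom pattern files can also be loaded explicitly with the grok filter's patterns_dir option (a sketch, assuming /opt/logstash/pattern holds a file defining NGINXACCESS):

grok {
  patterns_dir => ["/opt/logstash/pattern"]
  match => { "message" => "%{NGINXACCESS}" }
}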
I need to get a URL with query parameters, but I don't know how.
I need this -> "entityTypeId=172&filter[id]=1&filter[id]=3&filter[id]=5".
In JS I can do it like this:
var httpBuildQuery = require('http-build-query');

var params = {
  entityTypeId: 172,
  filter: {
    id: [1, 3, 5]
  }
};

const url = baseUrl + "?" + httpBuildQuery(params); // baseUrl: your endpoint
console.log(httpBuildQuery(params));
In PHP:
$params = array(
  'filter' => array('ID' => array('1', '3', '5')),
  'entityTypeId' => 172,
);
http_build_query($params);
In Dart I tried this:
var uri = Uri(
  scheme: 'http',
  host: 'b24-ybr1v4.bitrix24.ru',
  path: '/rest/1/token/crm.item.list.json',
  queryParameters: {
    'entityTypeId': '172',
    'filter': [
      {'id': '1'}
    ],
  },
);
But in this case I get an error:
The following TypeErrorImpl was thrown while handling a gesture:
Expected a value of type 'String', but got one of type 'IdentityMap<String, String>'
How can I get a parameter like "filter[id]"?
You could try moving the [id] part into the query parameter key.
var uri = Uri(
  scheme: 'http',
  host: 'b24-ybr1v4.bitrix24.ru',
  path: '/rest/1/token/crm.item.list.json',
  queryParameters: {
    'entityTypeId': '172',
    'filter[id]': ['1', '3', '5'],
  },
);
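Passing a list makes Uri repeat the key once per element. As a quick sanity check (a sketch; host and path are taken from your snippet), printing the result shows the brackets percent-encoded, which PHP-style servers decode transparently:

print(uri);
// http://b24-ybr1v4.bitrix24.ru/rest/1/token/crm.item.list.json?entityTypeId=172&filter%5Bid%5D=1&filter%5Bid%5D=3&filter%5Bid%5D=5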
I use ELK + Filebeat, all at version 6.4.3; the OS is Windows 10.
I added a custom field in filebeat.yml; the key name is log_type and its value is nginx-access.
The picture (not reproduced here) shows part of filebeat.yml; a sketch of the relevant section follows.
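Based on that description, the relevant part of filebeat.yml presumably looks something like this (a reconstruction, not the original screenshot; the log path is a placeholder):

filebeat.inputs:
- type: log
  paths:
    - C:\nginx\logs\access.log   # placeholder path
  fields:
    log_type: nginx-access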
The content of logstash.conf is:
input {
  beats {
    host => "0.0.0.0"
    port => "5544"
  }
}
filter {
  mutate {
    rename => { "[host][name]" => "host" }
  }
  if [fields][log_type] == "nginx-access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" \"%{DATA:[nginx][access][x_forwarded_for]}\" %{NUMBER:[nginx][access][request_time]}"] }
    }
    mutate {
      copy => { "[nginx][access][request_time]" => "[nginx][access][requesttime]" }
    }
    mutate {
      convert => {
        "[nginx][access][requesttime]" => "float"
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
When I use the command:
logstash.bat -f logstash.conf
the output is a screenshot (not reproduced here).
Question 1:
The fields in the red box above are "requesttime" and "request_time", but what I want is nginx.access.requesttime and nginx.access.request_time, not requesttime and request_time. How should I modify logstash.conf to achieve this?
Question 2:
When I use the above logstash.conf, the only such field in the Kibana management interface is "request_time".
The picture (not reproduced here) shows this.
If I want the "nginx.access.requesttime" field to also appear in the fields of the Kibana management interface, how should I modify logstash.conf?
Question 1:
I believe what you are looking for is
mutate {
  copy => { "[nginx][access][request_time]" => "nginx.access.requesttime" }
}
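Note that the destination here is a literal field name containing dots, not the nested [nginx][access][requesttime] form, so rubydebug and Kibana will show it as nginx.access.requesttime.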
Question 2:
Whether something is a keyword is determined by the template field mapping in Elasticsearch. Try the option above and see if the issue is resolved.
This issue on the Elastic forum may help you.
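If you end up controlling the mapping yourself, a minimal 6.x index template entry might look like this (a sketch; the template name and index pattern are assumptions, and "doc" is the default document type Logstash 6.x writes):

PUT _template/nginx_access
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "nginx.access.requesttime": { "type": "float" }
      }
    }
  }
}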
My components.ts is:
getHomePageData(): void {
  this.homeservice.getHomePageData()
    .subscribe(
      data => {
        //console.log("response status ################### " + data.status);
        //console.log("getUserData response ************ \n" + JSON.stringify(data));
        this.defaultFacilityId = data.response.defaultFacilityId;
        this.defaultFacilityName = data.response.defaultFacilityName;
        this.enterpriseId = data.response.enterpriseId;
        this.enterpriseName = data.response.enterpriseName;
        this.facilityList = data.response.facilityList;
        this.userName = data.response.userName;
        this.showDefaultPopoup();
      },
      error => {
        console.error(error);
        //this.errorMessage = "Technical error - Contact Support team !";
      }
    );
}
So my component.spec.ts is:
it('getHomePageData with SUCCESS - getHomePageData()', () => {
  backend.connections.subscribe((connection: MockConnection) => {
    //expect(connection.request.url).toEqual('http://localhost:8080/MSMTestWebApp/UDM/UdmService/Home/');
    expect(connection.request.url).toEqual('http://192.168.61.158:9080/GetUserData');
    expect(connection.request.method).toEqual(RequestMethod.Get);
    expect(connection.request.headers.get('Content-Type')).toEqual('application/json');
    let options = new ResponseOptions({
      body: {
        "request": { "url": "/getUserData" },
        "response": {
          "defaultFacilityName": "3M Health Information Systems",
          "enterpriseId": "11.0",
          "enterpriseName": "HSA Enterprise",
          "defaultFacilityId": "55303.0",
          "userName": "Anand"
        },
        "error": ""
      },
      status: 200
    });
    connection.mockRespond(new Response(options));
  });
  backend.connections.subscribe((data) => {
    //expect(data.response.facilityId).toEqual("55303.0");
    //expect(subject.handleError).toHaveBeenCalled();
  });
  service.getHomePageData().subscribe((data) => {
    //expect(videos.length).toBe(4);
    expect(data.response.defaultFacilityId).toEqual("55303.0");
    component.defaultFacilityId = data.response.defaultFacilityId;
    component.defaultFacilityName = data.response.defaultFacilityName;
    component.enterpriseId = data.response.enterpriseId;
    component.enterpriseName = data.response.enterpriseName;
    component.userName = data.response.userName;
    console.log("$$$$$$$$$$$$$$$$**********$$$$$$$$$$$$$$$$$$$$$");
  });
});
When I run the test case, it passes. But when I look at the code coverage, it doesn't cover the code shown in red below (screenshot not reproduced).
Please help me get full code coverage. Thanks.
In the test you've shown here, you don't seem to be calling getHomePageData() from your component.
Try building your test like this:
import { fakeAsync, tick } from '@angular/core/testing';
...
it('getHomePageData with SUCCESS - getHomePageData()', fakeAsync(() => {
  backend.connections.subscribe((connection: MockConnection) => {
    //expect(connection.request.url).toEqual('http://localhost:8080/MSMTestWebApp/UDM/UdmService/Home/');
    expect(connection.request.url).toEqual('http://192.168.61.158:9080/GetUserData');
    expect(connection.request.method).toEqual(RequestMethod.Get);
    expect(connection.request.headers.get('Content-Type')).toEqual('application/json');
    let options = new ResponseOptions({
      body: {
        "request": { "url": "/getUserData" },
        "response": {
          "defaultFacilityName": "3M Health Information Systems",
          "enterpriseId": "11.0",
          "enterpriseName": "HSA Enterprise",
          "defaultFacilityId": "55303.0",
          "userName": "Anand"
        },
        "error": ""
      },
      status: 200
    });
    connection.mockRespond(new Response(options));
  });
  // If this function is not automatically called in the component initialisation
  component.getHomePageData();
  tick();
  // you can call expects on your component's properties now
  expect(component.defaultFacilityId).toEqual("55303.0");
}));
fakeAsync lets you write tests in a more linear style, so you no longer have to subscribe to the service function to write your expectations.
In a fakeAsync test function you can call tick() after a call where an asynchronous operation takes place, to simulate the passage of time, and then continue with the flow of your code.
You can read more about this here: https://angular.io/docs/ts/latest/testing/#!#fake-async
EDIT - Error Case
To test the error logic you can call mockError or set up an error response using mockRespond on your connection:
it('getHomePageData with ERROR - getHomePageData()', fakeAsync(() => {
  backend.connections.subscribe((connection: MockConnection) => {
    if (connection.request.url === 'http://192.168.61.158:9080/GetUserData') {
      expect(connection.request.method).toEqual(RequestMethod.Get);
      expect(connection.request.headers.get('Content-Type')).toEqual('application/json');
      // mockError option
      connection.mockError(new Error('Some error'));
      // mockRespond option (use one or the other, not both)
      //connection.mockRespond(new Response(new ResponseOptions({
      //  status: 404,
      //  statusText: 'URL not Found',
      //})));
    }
  });
  component.getHomePageData();
  tick();
  // you can test your error logic here
}));
What we're doing inside the subscription is making sure that any time the GetUserData endpoint is called within this test method, it returns an error.
Because we test errors and successes separately, there's no need to add the error-related settings to the request options in the success test.
Are you using JSON data? Then you should probably use map() before using .subscribe().
.map((res:Response) => res.json())
Try organizing your code like this:
ngOnInit() {
  this.getHomePageData();
}

getHomePageData() {
  this.http.get('your.json')
    .map((res: Response) => res.json())
    .subscribe(
      data => {
        this.YourData = data;
      },
      err => console.error(err),
      () => console.log('ok')
    );
}
Hope it helps,
Cheers
I have an nginx access log that logs the request body as a JSON string, e.g.
"{\x0A\x22userId\x22 : \x22MyUserID\x22,\x0A\x22title\x22 : \x22\MyTitle\x0A}"
My objective is to store those two values (userId and title) in two separate fields in Elasticsearch.
My Logstash config:
filter {
  if [type] == "nginxlog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{NUMBER:res_time} %{GREEDYDATA:extra_fields}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      gsub => [
        "extra_fields", "\\x22", '"',
        "extra_fields", "\\x0A", '\n'
      ]
    }
    json {
      source => "extra_fields"
      target => "extra_json"
    }
    mutate {
      add_field => {
        "userId" => "%{[extra_json][userId]}"
        "title" => "%{[extra_json][title]}"
      }
    }
  }
}
But it's not extracted properly; the value of userId in ES is the literal string "userId" instead of MyUserID. Any clue where the problem is? Or any better idea to achieve the same result?
I'm using Logstash 1.5.6 on Ubuntu.
I wrote two config files in /etc/logstash/conf.d, specifying different input/output locations:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "mis-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/mis_template.json"
      template_overwrite => true
    }
  }
}
However, I can see data from /mnt/logs/mis_log_access.log and /mnt/logs/nginx/dmt_access.log both shown in the api-%{+YYYY.MM.dd} and mis-%{+YYYY.MM.dd} indexes, which is not what I wanted.
What's wrong with the configuration? Thanks.
Logstash reads all the files in your configuration directory and merges them all together into one config.
To make one filter or output section only run for one type of input, use conditionals:
if [type] == "api" {
....
}
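Applied to your File A, the guarded output would look something like this (a sketch reusing your settings; File B is analogous with "mis", and the filter sections get the same guard):

output {
  if [type] == "api" and "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}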
It is better to guard the filters with the input type:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  if [type] == "api" {
    ...
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  if [type] == "mis" {
    ...
  }
}
File C: output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
Working with Logstash 5.1.