I use ELK + Filebeat, all versions 6.4.3; the OS is Windows 10.
I added a custom field in filebeat.yml: the key name is log_type and its value is nginx-access.
The picture shows the relevant part of filebeat.yml.
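The screenshot is not reproduced here; as a sketch, the part of filebeat.yml in question presumably looks like this (only the fields block is described in the question — the log path and the output section are assumptions):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\nginx\logs\access.log   # hypothetical log path
    fields:
      log_type: nginx-access       # custom field, read as [fields][log_type] in Logstash

output.logstash:
  hosts: ["localhost:5544"]        # matches the beats input port in logstash.conf
```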
The content of logstash.conf is:
input {
  beats {
    host => "0.0.0.0"
    port => "5544"
  }
}
filter {
  mutate {
    rename => { "[host][name]" => "host" }
  }
  if [fields][log_type] == "nginx-access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\" \"%{DATA:[nginx][access][x_forwarded_for]}\" %{NUMBER:[nginx][access][request_time]}"] }
    }
    mutate {
      copy => { "[nginx][access][request_time]" => "[nginx][access][requesttime]" }
    }
    mutate {
      convert => {
        "[nginx][access][requesttime]" => "float"
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
When I use the command:
logstash.bat -f logstash.conf
The output is:
Question 1:
The fields in the red box above are "requesttime" and "request_time". I want them to be nginx.access.requesttime and nginx.access.request_time, not requesttime and request_time. How should I modify logstash.conf to achieve this?
Question 2:
When I use the logstash.conf above, the only one of these fields shown in the Kibana management interface is "request_time".
The picture shows this:
If I want the "nginx.access.requesttime" field to also appear in the fields of the Kibana management interface, how should I modify logstash.conf?
Question 1:
I believe what you are looking for is:
mutate {
  copy => { "[nginx][access][request_time]" => "nginx.access.requesttime" }
}
Question 2:
Whether something is a keyword is determined by the index template's field mappings in Elasticsearch. Try the option above and see if the issue is resolved.
This issue in the Elastic forum may help you.
I am trying to add a custom field to the user by using WPGraphQL, so I tried to recreate the example in the official WPGraphQL documentation https://docs.wpgraphql.com/extending/fields/#register-fields-to-the-schema:
add_action('graphql_init', function () {
    $hobbies = [
        'type' => ['list_of' => 'String'],
        'description' => __('Custom field for user mutations', 'your-textdomain'),
        'resolve' => function ($user) {
            $hobbies = get_user_meta($user->userId, 'hobbies', true);
            return !empty($hobbies) ? $hobbies : [];
        },
    ];
    register_graphql_field('User', 'hobbies', $hobbies);
    register_graphql_field('CreateUserInput', 'hobbies', $hobbies);
    register_graphql_field('UpdateUserInput', 'hobbies', $hobbies);
});
I already changed the type from \WPGraphQL\Types::list_of( \WPGraphQL\Types::string() ) to ['list_of' => 'String'].
If I now execute the updateUser mutation, my hobbies don't get updated. What am I doing wrong?
Mutation:
mutation MyMutation {
  __typename
  updateUser(input: {clientMutationId: "tempId", id: "dXNlcjox", hobbies: ["football", "gaming"]}) {
    clientMutationId
    user {
      hobbies
    }
  }
}
Output:
{
  "data": {
    "__typename": "RootMutation",
    "updateUser": {
      "clientMutationId": "tempId",
      "user": {
        "hobbies": []
      }
    }
  }
}
Thanks to xadm, the only thing I forgot was to actually persist the field in the mutation. I was a bit confused by the documentation, my fault. (I really am new to WPGraphQL, btw.)
Here's what has to be added:
add_action('graphql_user_object_mutation_update_additional_data', 'graphql_register_user_mutation', 10, 5);

function graphql_register_user_mutation($user_id, $input, $mutation_name, $context, $info)
{
    if (isset($input['hobbies'])) {
        // Consider other sanitization if necessary and validation such as which
        // user role/capability should be able to insert this value, etc.
        update_user_meta($user_id, 'hobbies', $input['hobbies']);
    }
}
I can't access a primaryTag variable in my GraphQL page-query.
What I want to achieve is on a blog Post page:
display the post content
display the related posts (based on the first tag)
In my gridsome.server.js
api.createPages(async ({ graphql, createPage }) => {
  // Use the Pages API here: https://gridsome.org/docs/pages-api
  const { data } = await graphql(`{
    allPost {
      edges {
        node {
          id
          path
          tags {
            id
          }
        }
      }
    }
  }`)

  data.allPost.edges.forEach(({ node }) => {
    createPage({
      path: `${node.path}`,
      component: './src/templates/Post.vue',
      context: {
        id: node.id,
        path: node.path,
        primaryTag: (node.tags[0] && node.tags[0].id) || '',
      }
    })
  })
})
then in my Post.vue
<page-query>
query Post ($path: String!, $primaryTag: String!) {
  post: post (path: $path) {
    title
    path
    content
  }
  related: allPost(
    filter: { tags: { contains: [$primaryTag] }, path: { ne: $path } }
  ) {
    edges {
      node {
        id
        title
        path
      }
    }
  }
}
</page-query>
Unfortunately I get the following error: Variable "$primaryTag" of non-null type "String!" must not be null.
Also, as a side note (and that might be the bug issue), I'm using @gridsome/source-filesystem and @gridsome/transformer-remark to create my Post collection.
If you know how to solve this or have a better approach for getting the related posts, comment below.
Libs:
- gridsome version: 0.6.3
- @gridsome/cli version: 0.1.1
I am writing a Cypress test to log in to a website. There are username and password fields and a Submit button. Mostly logins are straightforward, but sometimes a warning dialog appears first that has to be dismissed.
I tried this:
cy.get('#login-username').type('username');
cy.get('#login-password').type(`password{enter}`);

// Check for a possible warning dialog and dismiss it
if (cy.get('.warning')) {
  cy.get('#warn-dialog-submit').click();
}
Which works fine, except that the test fails if the warning doesn't appear:
CypressError: Timed out retrying: Expected to find element: '.warning', but never found it.
Then I tried this, which fails because the warning doesn't appear fast enough, so Cypress.$ doesn't find anything:
cy.get('#login-username').type('username');
cy.get('#login-password').type(`password{enter}`);

// Check for a possible warning dialog and dismiss it
if (Cypress.$('.warning').length > 0) {
  cy.get('#warn-dialog-submit').click();
}
What is the correct way to check for the existence of an element? I need something like cy.get() that doesn't complain if the element can't be found.
export function clickIfExist(selector) {
  cy.get('body').then(($body) => {
    // Click only when the selector matches at least one element
    if ($body.find(selector).length > 0) {
      cy.get(selector).click();
    }
  });
}
export const getEl = name => cy.get(`[data-cy="${name}"]`)

export const checkIfElementPresent = (visibleEl, text) => {
  cy.document().then((doc) => {
    // If the element is in the DOM, assert its text; otherwise assert its absence
    if (doc.querySelectorAll(`[data-cy=${visibleEl}]`).length) {
      getEl(visibleEl).should('have.text', text)
      return
    }
    getEl(visibleEl).should('not.exist')
  })
}
I have done it with pure js.
cy.get('body').then((jqBodyWrapper) => {
  if (jqBodyWrapper[0].querySelector('.pager-last a')) {
    cy.get('.pager-last a').then(jqWrapper => {
      // hardcoded due to similarities found on page
      const splitLink = jqWrapper[0].href.split("2C");
      AMOUNT_OF_PAGES_TO_BE_RETRIEVED = Number(splitLink[splitLink.length - 1]) + 1;
    })
  } else {
    AMOUNT_OF_PAGES_TO_BE_RETRIEVED = 1;
  }
});
I'm trying to check whether the element exists on the body:
cy.get('body').then((jqBodyWrapper) => {
With a pure JS querySelector:
if (jqBodyWrapper[0].querySelector('.pager-last a')) {
Then I fire my cy.get():
cy.get('.pager-last a').then(jqWrapper => {
jQuery's has() method filters a set down to the elements that contain a descendant matching a CSS selector (hasClass() does the same for a class name); checking the length of the result gives you a boolean you can use for assertions.
Cypress.Commands.add('isExistElement', selector => {
  // Yield a boolean. Note that $body.has(selector) is always truthy (it returns
  // a jQuery object even when empty), so check the matched length instead.
  return cy.get('body').then(($body) => $body.find(selector).length > 0)
});
Then it can be turned into a custom, chainable Cypress command by declaring it in a TypeScript declaration file (index.d.ts):
declare namespace Cypress {
  interface Chainable {
    isExistElement(cssSelector: string): Cypress.Chainable<boolean>
  }
}
As in the example below:
shouldSeeCreateTicketTab() {
  cy.isExistElement(homePageSelector.createTicketTab).should("be.true");
}
I have an nginx access log that logs the request body as a JSON string, e.g.
"{\x0A\x22userId\x22 : \x22MyUserID\x22,\x0A\x22title\x22 : \x22\MyTitle\x0A}"
My objective is to store those two values (userId and title) in two separate fields in Elasticsearch.
My Logstash config:
filter {
  if [type] == "nginxlog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:partner_id} %{NUMBER:req_time} %{NUMBER:res_time} %{GREEDYDATA:extra_fields}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      gsub => [
        "extra_fields", "\\x22", '"',
        "extra_fields", "\\x0A", '\n'
      ]
    }
    json {
      source => "extra_fields"
      target => "extra_json"
    }
    mutate {
      add_field => {
        "userId" => "%{[extra_json][userId]}"
        "title" => "%{[extra_json][title]}"
      }
    }
  }
}
But it's not properly extracted: the value of userId in ES is the literal string userId instead of MyUserID. Any clue where the problem is? Or any better idea to achieve the same?
I'm using Logstash 1.5.6 on Ubuntu.
I wrote two config files in /etc/logstash/conf.d, specifying different input/output locations:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  ...
}
output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "mis-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/mis_template.json"
      template_overwrite => true
    }
  }
}
However, I can see data from /mnt/logs/mis_log_access.log and /mnt/logs/nginx/dmt_access.log shown in both the api-%{+YYYY.MM.dd} and the mis-%{+YYYY.MM.dd} index, which is not what I wanted.
What's wrong with the configuration? Thanks.
Logstash reads all the files in your configuration directory and merges them all together into one config.
To make one filter or output section only run for one type of input, use conditionals:
if [type] == "api" {
  ....
}
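As a sketch, File A could guard both its filter and its output this way (filter bodies elided as in the question, values copied from it):

```
# File A (sketch)
filter {
  if [type] == "api" {
    ...   # api-specific filters
  }
}
output {
  if [type] == "api" and "_grokparsefailure" not in [tags] {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "api-%{+YYYY.MM.dd}"
      template => "/opt/logstash/template/api_template.json"
      template_overwrite => true
    }
  }
}
```

File B would mirror this with [type] == "mis" and its own index and template.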
It is better to guard the filters with the input type:
File A:
input {
  file {
    type => "api"
    path => "/mnt/logs/api_log_access.log"
  }
}
filter {
  if [type] == "api" {
    ...
  }
}
File B:
input {
  file {
    type => "mis"
    path => "/mnt/logs/mis_log_access.log"
  }
}
filter {
  if [type] == "mis" {
    ...
  }
}
File C: output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
This works with Logstash 5.1.