Migrating MySQL data into Elasticsearch using Logstash for Kibana

I'm new to Kibana. I am working on migrating data from MySQL to Elasticsearch. How can I do this? Is using the JDBC input plugin the only way?
Here is the logstash.conf file where I specified the input and output:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/kibana"
    jdbc_user => "xxx"
    jdbc_password => "xxxxx"
    jdbc_driver_library => "/root/mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM datalog"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}
After running the above file with ./logstash -f logstash.conf we get the messages below:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
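Note that both lines are warnings rather than fatal errors: Logstash simply cannot find logstash.yml and log4j2.properties and falls back to its defaults, as the messages themselves say. If the settings directory is /etc/logstash, starting Logstash with the flag mentioned in the warning, for example ./logstash --path.settings /etc/logstash -f logstash.conf, should silence both messages; the JDBC pipeline itself still runs either way.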

Logstash Error: agent - Failed to execute action

I've been following a tutorial on how to use the ELK stack for nginx logs.
I've created nginx.conf to configure how to get the logs, but when I type: bin/logstash -f /etc/logstash/conf.d/nginx.conf
I get this error:
[ERROR] 2020-11-13 14:59:15.254 [Converge PipelineAction::Create] agent - Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError",
:message=>"Expected one of [A-Za-z0-9_-], [ \t\r\n], "#", "=>" at line 9, column 8 (byte 135) after input{\n\t\n file{\n path => ["/var/log/nginx/access.log" , "/var/log/nginx/error.log"]\n type => "nginx"\n }\n filter{\n \n grok",
:backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'",
"org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'",
"org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:365:in `block in converge_state'"]}
and here's my nginx.conf file:
input{
file{
path => ["/var/log/nginx/access.log" , "/var/log/nginx/error.log"]
type => "nginx"
}
filter{
grok{
match => ["message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
overwrite => ["message"]
}
mutate{
convert => ["response","integer"]
convert => ["bytes","integer"]
convert => ["responsetime","float"]
}
geoip{
source => "clientip"
target => "geoip"
add_tag => ["nginx-geoip"]
}
date {
match => ["timestamp" , "dd/MMM/YYYY:HH:mm:ss Z"]
remove_field => ["timestamp"]
}
useragent {
source => "agent"
}
}
output{
elasticsearch {
hosts => ["localhost:9200"]
index => "nginx-%{+yyyy.MM.dd}"
document_type => "nginx_logs"
}
}
}
I found a similar question, but the answer didn't help.
Is there anyone familiar with Logstash syntax who can help me figure out my error?
Thank you
You are missing a } to close the input section. Insert it before the filter keyword.
Also, remove the last } in the file.
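For reference, here is a sketch of the whole file with the braces rebalanced (same plugins and options as in the question, nothing else changed):
input {
  file {
    path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
    type => "nginx"
  }
}
filter {
  grok {
    match => ["message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
    overwrite => ["message"]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => ["nginx-geoip"]
  }
  date {
    match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
    remove_field => ["timestamp"]
  }
  useragent {
    source => "agent"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-%{+yyyy.MM.dd}"
    document_type => "nginx_logs"
  }
}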

Logstash: forwarding logs via proxy to logz.io

I want to ship stdout from a running application to logz.io using Logstash. The application and Logstash are both Docker images managed by docker-compose, which does the setup (pulling images, network_mode, logging driver etc.). Logstash input is handled via the gelf input plugin. The shipping to logz.io is handled via the tcp output plugin.
logstash.conf:
input {
  gelf {
    type => docker
    port => 12201
  }
}
filter {
  mutate {
    add_field => { "token" => "${LOGZIOTOKEN}" }
  }
}
output {
  tcp {
    host => "listener.logz.io"
    port => 5050
    codec => json_lines
  }
}
excerpt from docker-compose.yml:
application:
  ...
  logging:
    driver: "gelf"
    options:
      gelf-address: "udp://0.0.0.0:12201"
This works as expected.
Now there is a TCP proxy server I need to use to ship the logs from the host (running the Logstash instance) to logz.io. Unfortunately I did not find a proxy option for Logstash's tcp output plugin. Does anyone have a suggestion for this issue?
Logstash's http output plugin has a proxy attribute. You have to use the logz.io ports for HTTP shipping (the same ones documented for curl): 8070 (http) / 8071 (https).
A working config looks like this:
output {
  http {
    url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
    http_method => "post"
    format => "json"
    content_type => "application/json"
    proxy => {
      host => "${PROXYHOST}"
      port => "${PROXYPORT}"
      scheme => 'http'
      user => "${PROXYUSER}"
      password => "${PROXYPW}"
    }
  }
}
You do not need the token filter from the tcp output config when shipping to logz.io this way, since the token goes into the URL. Just add your input and ship it!

Masking data in Logstash using a web service

I am receiving data using Logstash (2.3) and want to mask it or add an additional field, all using an external web service as the source.
I have a web service available at something like:
someserver:8080/webpage?id=1
I would like to extract a value that I get using this URL and inject it into the data.
My config file looks like:
input {
  file {
    path => "/opt/logstash/test/*.csv"
    start_position => "beginning"
    sincedb_path => "/opt/logstash/test/output/test.db"
  }
  http {
    url => "http://localhost:8080/webpage"
  }
}
filter {
  csv {
    columns => ["col1", "col2", "col3"]
    separator => ","
    skip_empty_columns => true
  }
}
output {
  stdout { codec => rubydebug }
  csv {
    fields => ["col1", "col2", "col3"]
    path => "/opt/logstash/test/output/test.csv"
  }
}
What I would like to achieve is to replace each col1 value with a value obtained from that external source.
I found the http plugin, but it doesn't look straightforward to me.
Thanks
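One way to approach this (a rough sketch rather than a tested configuration: it assumes the http filter plugin is installed, which may require a newer Logstash than 2.3, and the exact option names should be checked against the installed plugin version) is to call the web service from the filter section, after the csv filter, and overwrite col1 with the response:
filter {
  # hypothetical lookup: ask the web service for the value belonging to col1
  http {
    url => "http://someserver:8080/webpage?id=%{col1}"
    verb => "GET"
    target_body => "[@metadata][lookup]"
  }
  # overwrite col1 with whatever the service returned
  mutate {
    replace => { "col1" => "%{[@metadata][lookup]}" }
  }
}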

Check if Oracle table exists with Puppet

I'm setting up a Puppet provision file to install, configure and restore a dump file into an Oracle database.
I would like to include a check in the exec command to verify whether the restore was successful.
Here is what I have so far:
exec {"import-dump":
command => "impdp system/password DUMPFILE=MYDUMP.DMP LOGFILE=import-dump.log SCHEMAS=MYSCHEMA",
path => "/u01/app/oracle/product/11.2.0/xe/bin/",
-- something to check if the import command already runned successfully ---
require => Exec["install-oracle"],
}
I would use an approach like the following:
exec { "import-dump":
command => "impdp system/password DUMPFILE=MYDUMP.DMP LOGFILE=import-dump.log SCHEMAS=MYSCHEMA",
path => "/u01/app/oracle/product/11.2.0/xe/bin/",
unless => "/bin/grep 'terminated successfully' /path/to/import-dump.log",
require => Exec["install-oracle"],
}
In this way you can check if a previous import job was run successfully.

laravel development environment sqlite database does not exist

Trying to use SQLite in the development environment. It seems to detect the environment correctly, but when I try to migrate to development.sqlite I get the exception "database does not exist".
artisan command
php artisan migrate --env=development
bootstrap/start.php
$env = $app->detectEnvironment(array(
    'development' => array('localhost'),
));
app/config/development/database.php
<?php
return array(
    'default' => 'sqlite',
    'connections' => array(
        'sqlite' => array(
            'driver' => 'sqlite',
            'database' => __DIR__.'/../database/development.sqlite',
            'prefix' => '',
        )
    )
);
As far as I know Laravel is supposed to create the file if it does not exist, but since it didn't, I tried manually creating the file and still get the exception.
UPDATE: Maybe something is not right with the env, because the same thing happens if I try ':memory' for the database.
UPDATE 2: I tried running the sample unit test, adding this to TestCase.php:
/**
 * Default preparation for each test
 */
public function setUp()
{
    parent::setUp(); // Don't forget this!
    $this->prepareForTests();
}

/**
 * Creates the application.
 *
 * @return Symfony\Component\HttpKernel\HttpKernelInterface
 */
public function createApplication()
{
    $unitTesting = true;
    $testEnvironment = 'testing';
    return require __DIR__.'/../../bootstrap/start.php';
}

/**
 * Migrates the database and set the mailer to 'pretend'.
 * This will cause the tests to run quickly.
 */
private function prepareForTests()
{
    Artisan::call('migrate');
    Mail::pretend(true);
}
And this too gives the same exception, even though the testing env already ships with Laravel. So I'll see if I can find any new issues on that.
Wow, typos and wrong paths.
Copying the sqlite array from config/database.php into config/development/database.php, I forgot to change the path to the development.sqlite file from
__DIR__.'/../database/development.sqlite'
to
__DIR__.'/../../database/development.sqlite'
And for the in memory test it should have been
':memory:'
instead of
':memory'
I noticed that my database.php file had the following
'sqlite' => [
'driver' => 'sqlite',
'database' => env('DB_DATABASE', database_path('database.sqlite')),
'prefix' => '',
],
I changed it to read the following, and it worked just fine.
'sqlite' => [
'driver' => 'sqlite',
'database' => database_path('database.sqlite'),
'prefix' => '',
],
One of the problems I faced was that I used "touch storage/database.sqlite" in the terminal, so the database was created in the storage folder instead of the database folder.
In my config/database.php the path is database_path('database.sqlite'):
'sqlite' => [
'driver' => 'sqlite',
'database' => database_path('database.sqlite'),
'prefix' => '',
],
Then I used the command "php artisan migrate", which gave me the error "Database (/Applications/MAMP/htdocs/FOLDER_NAME/database/database.sqlite) does
not exist."
So it's obvious the database file is not in the database folder, as it was generated in the storage folder; copy "database.sqlite" from the storage folder or run the command "touch database/database.sqlite".
Hope that helps!
Well, my answer is kinda outdated, but anyway. I faced the same problem, but with Laravel 5; I am using Windows 7 x64. First I manually created an SQLite database called 'db' and placed it into the storage directory, then fixed my .env file like this:
APP_ENV=local
APP_DEBUG=true
APP_KEY=oBxQMkpqbENPb07bLccw6Xv7opAiG3Jp
DB_HOST=localhost
DB_DATABASE='db'
DB_USERNAME=''
DB_PASSWORD=''
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
I thought it would fix my problems, but the command line kept telling me that the database doesn't exist. Then I checked the path to the db in my database.php file, which is why I had put the database file into the storage directory. But nothing changed. Finally I checked the db's extension: it was .db, not .sqlite like the default extension you see in the sqlite block in database.php. So this is how I reconfigured the sqlite piece:
'sqlite' => [
'driver' => 'sqlite',
'database' => storage_path().'/db.db',
'prefix' => '',
],
And of course don't forget to set sqlite as the default database in your database.php file. Good luck!
For me, the path to the database had to be '/var/www/html' plus the location of the database in the project. In my case the database was stored in database/db.sqlite, so DB_DATABASE='/var/www/html/database/db.sqlite'.
I had the same error while running a GitHub Actions test workflow.
For me the solution was to define the relative path to the database file in the workflow file:
on:
  ...
env:
  DB_CONNECTION: sqlite
  DB_DATABASE: database/database.sqlite
jobs:
  laravel-tests:
    ...
I think that the previous answers underplay the importance of the config; most likely the framework developers wanted the database file to be resolved like this:
'sqlite' => [
'driver' => 'sqlite',
'url' => env('DATABASE_URL'),
'database' => database_path(env('DB_DATABASE', 'database').'.sqlite'), // <- like this
'prefix' => '',
'foreign_key_constraints' => env('DB_FOREIGN_KEYS', true),
],
Tested on Laravel 9.x
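A usage note on that snippet (my reading of it, not something stated in the original posts): with database_path(env('DB_DATABASE', 'database').'.sqlite'), setting DB_DATABASE=development in .env resolves to database/development.sqlite, which matches the file name from the original question, while leaving DB_DATABASE unset falls back to database/database.sqlite.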
