Logstash: forwarding logs via proxy to logz.io - tcp

I want to ship stdout from a running application to logz.io using Logstash. The application and Logstash are both Docker images managed by docker-compose, which handles the setup (pulling images, network_mode, logging driver, etc.). Logstash input is handled via the gelf input plugin; shipping to logz.io is handled via the tcp output plugin.
logstash.conf:
input {
  gelf {
    type => docker
    port => 12201
  }
}
filter {
  mutate {
    add_field => { "token" => "${LOGZIOTOKEN}" }
  }
}
output {
  tcp {
    host => "listener.logz.io"
    port => 5050
    codec => json_lines
  }
}
Excerpt from docker-compose.yml:
application:
  ...
  logging:
    driver: "gelf"
    options:
      gelf-address: "udp://0.0.0.0:12201"
This works as expected.
Now there is a TCP proxy server I need to use to ship the logs from the host (running the Logstash instance) to logz.io. Unfortunately, I did not find a proxy option for Logstash's tcp output plugin. Does anyone have a suggestion for this issue?

Logstash's http output plugin has a proxy attribute. You have to use the logz.io ports meant for HTTP(S) shipping (the same ones used with curl): 8070 (http) / 8071 (https).
A working config looks like this:
output {
  http {
    url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
    http_method => "post"
    format => "json"
    content_type => "application/json"
    proxy => {
      host => "${PROXYHOST}"
      port => "${PROXYPORT}"
      scheme => "http"
      user => "${PROXYUSER}"
      password => "${PROXYPW}"
    }
  }
}
With this output you do not need the token filter from the tcp output config, since the token travels in the URL. Just add the input and ship it!
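Putting the pieces together, a minimal complete logstash.conf for this setup could look like the following sketch; it just combines the gelf input from the question with the http output above, with no filter block needed:
input {
  gelf {
    type => docker
    port => 12201
  }
}
output {
  http {
    url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
    http_method => "post"
    format => "json"
    content_type => "application/json"
    proxy => {
      host => "${PROXYHOST}"
      port => "${PROXYPORT}"
      scheme => "http"
      user => "${PROXYUSER}"
      password => "${PROXYPW}"
    }
  }
}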

Related

DDEV and D8, httpClient used for internal request fails to connect

I have a multisite installation of Drupal 8. The "main" website exposes some REST web services. Locally I have some trouble testing them, because there's no way for the various sites to see each other when I try to do something like this:
try {
  $response = $this->httpClient->get($this->baseUri . '/myendpoint', [
    'headers' => [
      'Accept' => 'application/json',
      'Content-type' => 'application/hal+json',
    ],
    'query' => [
      '_format' => 'json',
      'myparameters' => 'value',
    ],
  ]);
  $results = $this->serializer->decode($response->getBody(), 'json');
}
catch (RequestException $e) {
  $this->logger->warning($e->getMessage());
}
return $results;
I always receive a timeout, and there's no way I can make it work. I have my main website with the usual URL project.ddev.site (and $this->baseUri is https://myproject.ddev.site), and all the other websites are of the form subsite.ddev.local.
If I ssh into the project and run ping myproject.ddev.site, I see 172.19.0.6.
I don't understand why they cannot see each other...
Just for other people who may have a similar problem: my issue was with Xdebug. I have it set to autoconnect, so when the request from the subsite to the main site was made, it got stuck somewhere (PhpStorm didn't stop anywhere, by the way), which made the request time out.
By disabling it, or configuring it only for the subdomain and stopping PhpStorm from accepting external connections from unconfigured servers, it started working. I still have some work to do, as I need to debug "both sides" of the request, but this way I can work with it...
I hadn't thought of trying to disable Xdebug before, because it simply didn't come to mind...
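For anyone on DDEV hitting the same thing: Xdebug can be toggled per project from the host, so a quick way to rule it out (assuming a standard DDEV setup) is:
# Xdebug is off by default in DDEV; disable it if it was enabled
ddev xdebug off
# re-enable it only while you actually need to step through code
ddev xdebug on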

How to fix curl_error: SSL: no alternative certificate subject name matches target host name 'api.telegram.org'

I am using telegram.php to connect to my bot. When I use sendMessage, everything looks fine in my logs, but I do not receive anything from the bot.
When I check my log, there is a problem like this:
ok: False
curl_error_code: 51
curl_error: SSL: no alternative certificate subject name matches target host name 'api.telegram.org'
I don't know what to do to fix it.
I don't know this Telegram bot library, but I see that it uses GuzzleHttp.
During initialization it doesn't accept any configuration (Request::initialize()):
public static function initialize(Telegram $telegram)
{
    if (!($telegram instanceof Telegram)) {
        throw new TelegramException('Invalid Telegram pointer!');
    }
    self::$telegram = $telegram;
    self::setClient(new Client(['base_uri' => self::$api_base_uri]));
}
You should check its documentation. I see that there are a lot of setters which let you override the default settings.
What you need is to set \GuzzleHttp\RequestOptions::VERIFY to false in the client config:
$this->client = new \GuzzleHttp\Client([
    'base_uri' => 'someAccessPoint',
    \GuzzleHttp\RequestOptions::HEADERS => [
        'User-Agent' => 'some-special-agent',
    ],
    'defaults' => [
        \GuzzleHttp\RequestOptions::CONNECT_TIMEOUT => 5,
        \GuzzleHttp\RequestOptions::ALLOW_REDIRECTS => true,
    ],
    \GuzzleHttp\RequestOptions::VERIFY => false,
]);
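Note that VERIFY => false turns off TLS certificate checking entirely, so it is best treated as a temporary debugging aid. A safer variant (a sketch; the bundle path is an assumption and must match your server) points VERIFY at a CA bundle instead:
$this->client = new \GuzzleHttp\Client([
    'base_uri' => 'someAccessPoint',
    // verify the peer against an explicit CA bundle instead of
    // disabling verification; adjust the path to your system
    \GuzzleHttp\RequestOptions::VERIFY => '/path/to/cacert.pem',
]);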
To fix this problem, copy this URL into your browser (substituting your bot token) and set the webhook:
https://api.telegram.org/botTOKEN/setWebhook?url=https://yourwebsite.com
Solution 2 for the error
Let's follow these simple steps:
Download this bundle of root certificates: https://curl.haxx.se/ca/cacert.pem
Put it in any location on your server.
Open php.ini and add this line:
curl.cainfo = "[the_location]\cacert.pem"
Restart your web server.
That’s it. 🙂
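To confirm the new setting was actually picked up after the restart, a quick command-line check (not part of the original steps) is:
php -r "var_dump(ini_get('curl.cainfo'));"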

Migrating MySQL data into Elasticsearch using Logstash for Kibana

I'm new to Kibana. I am working on data migration from MySQL to Elasticsearch. How can I do this? Is using the jdbc input plugin the only way?
Here is the logstash.conf file where I specified the input and output:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/kibana"
jdbc_user => "xxx"
jdbc_password => "xxxxx"
jdbc_driver_library => "/root/mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
statement => "SELECT * FROM datalog"
}
}
output {
elasticsearch {
"hosts" => "localhost:9200"
}
stdout { codec => rubydebug }
}
After running the above file with ./logstash -f logstash.conf, we get the messages below:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
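Both lines are warnings rather than pipeline errors: Logstash falls back to its built-in defaults when it cannot find logstash.yml or log4j2.properties. If your settings directory is /etc/logstash (an assumption; adjust to your install), you can point Logstash at it explicitly:
./logstash -f logstash.conf --path.settings /etc/logstash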

Symfony & ELK: which encoding should I use with gelf?

I'm trying to monitor a Symfony app with the ELK stack.
I'm shipping my logs to Logstash with the following configuration:
monolog:
  handlers:
    main:
      type: gelf
      publisher:
        hostname: elk-host
        port: 10514
      formatter: monolog.formatter.gelf_message
      level: INFO
On Kibana, I see that I receive the logs, but the message is encoded in a strange way; here is an example of what Kibana displays:
x\x9CMP\xC1n\x830\f\xFD\u0015+\xA7V\xAAB\xA1\f(\xD7j;Nڴ\xDD\"Ui0`)\u0004D\(\x9A\xF6\xEF\v\x9B\xD6\xEDf\xBFg\xFB\xF9\xBD\u000F1\xE1\xE8\xA9w\xA2\u0014\xB1܋\x9Dh{ϡ\u0019\xFA\x915Y\xCF^\xDA\xDEh\e\u0018\xDF\u0006\xECܡ\xF7\xBA\xC10\xF2\x8A5\x8E\xE8\f\xB9\u0006\xB8EP\xC2\xF4#*\u0001xct\xEBQ\xB8,#\xEC\xC1\xE9\u000EaSaM\u000E\xAB\u0015l\x90\x9F\u0003\xB6\xD9n\x81
Here is my Logstash configuration file:
input {
  gelf {
    codec => "json"
  }
  syslog {
    port => 10514
    type => "syslog"
  }
}
filter {
}
output {
  elasticsearch {}
}
I tried to add an encoding option (charset => "UTF-8"), but it did not help.
Also, why are my logs displayed with the "syslog" type instead of the "gelf" type I specified in the monolog config?
You're sending GELF (JSON) output to a syslog listener. GELF UDP payloads are typically zlib/gzip-compressed JSON, which is why the syslog input shows them as binary garbage and tags them with its own "syslog" type. You need to send them to the GELF port rather than the syslog port.
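As a minimal sketch of the fix (assuming monolog's default GELF UDP transport), give the gelf input an explicit port and point the monolog publisher at it instead of at 10514:
input {
  gelf {
    # monolog's publisher.port must be changed to match this port
    port => 12201
  }
  syslog {
    port => 10514
    type => "syslog"
  }
}
With that, GELF traffic arrives on 12201, the syslog input keeps 10514 for actual syslog traffic, and the events are no longer tagged by the syslog input.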

Logstash gelf not receiving messages

I'm trying to send my Symfony2 logs to a Logstash server, but the server doesn't receive the logs =/
My Logstash config in /etc/logstash/conf.d/logstash.conf is:
input {
  gelf {
    port => "12201"
    host => "0.0.0.0"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    debug => true
    debug_format => "json"
  }
}
The files in /var/log/logstash are empty. With monolog, I captured the strings being sent:
{"_facility":"request",
"_ctxt_route_parameters":"...",
"_ctxt_request_uri":"...",
"version":"1.0",
"short_message":"...",
"full_message":null,
"host":"dev",
"timestamp":1462196712,
"level":6}
What is wrong with my config? I checked the Elasticsearch indexes:
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 1 1 1 0 3.1kb 3.1kb
Thanks for helping
