I'm getting unknown error:52 when trying to connect to my Bonsai Elasticsearch on Heroku.
I use FOSElastica with Symfony. It worked before, but it has suddenly stopped.
These are my settings:
fos_elastica:
    clients:
        default:
            host: %elasticsearch.host%
            port: %elasticsearch.port%
            headers:
                Authorization: "Basic %elasticsearch.token%"
Where the elasticsearch.token is generated using this:
php -r "Print base64_encode('your_auth_username' . ':' . 'your_auth_password');"
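The same value can also be produced with coreutils instead of PHP; here with a hypothetical `user:pass` pair for illustration (printf rather than echo, so no trailing newline ends up inside the encoding):

```shell
# Encode hypothetical credentials as a Basic auth token.
# printf (not echo) avoids including a trailing newline in the encoding.
printf '%s' 'user:pass' | base64
# → dXNlcjpwYXNz
```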
My host is of the format:
username:password@myhost.net
And my port is 443 (80 doesn't work as well).
No further logging is given ...
Try removing the headers section; it helped me.
I'm trying to use fluent-bit to ship logs from log files to Telegraf, which is listening on port 8094. I'm able to send data to this port via the terminal like this:
echo "some_log_data" | nc localhost 8094
but when I use the fluent-bit forward output plugin to send data to the same port, it gives this error in the fluent-bit logs:
fluent-bit_1 | [2019/11/21 11:14:44] [error] [io] TCP connection failed: localhost:8094 (Connection refused)
fluent-bit_1 | [2019/11/21 11:14:44] [error] [out_fw] no upstream connections available
This is my docker-compose file:
version: '3'
services:
  # Define a Telegraf service
  telegraf:
    image: telegraf
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
    ports:
      - "8092:8092/udp"
      - "8094:8094"
      - "8125:8125/udp"
      - "9126:9126"
    networks:
      - mynet
  fluent-bit:
    image: fluent/fluent-bit:1.3.2-debug
    volumes:
      - ./fluent-bit:/fluent-bit/etc
      - ./access_logs/localhost_access_log:/logs
    depends_on:
      - telegraf
    networks:
      - mynet
networks:
  mynet:
fluent-bit.conf:
[SERVICE]
    Flush            2
    Parsers_File     parsers.conf
[INPUT]
    Name             tail
    Tag              cuic.logs
    Path             /logs/*.log
    Path_Key         File_Path
    Multiline        On
    Parser_Firstline start
[OUTPUT]
    Name             forward
    Match            *
    Host             localhost
    Port             8094
    Tag              cuic.logs
telegraf.conf:
[[outputs.file]]
files = ["/tmp/metrics.out"]
data_format = "json"
json_timestamp_units = "1s"
[[inputs.socket_listener]]
service_address = "tcp://:8094"
socket_mode = "777"
data_format = "grok"
grok_patterns = ["%{CUSTOM_LOG}"]
grok_custom_patterns = '''
SOME_GROK_PATTERN
'''
[[aggregators.histogram]]
period = "10s"
drop_original = false
[[aggregators.histogram.config]]
buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
measurement_name = "access_log"
fields = ["resp_time"]
Can someone please help me figure out what I did wrong?
I think the problem is the hostname "localhost". Inside a container, localhost refers to the container's own network namespace, so fluent-bit won't be able to reach the other container's TCP port that way. Since both services are attached to the mynet network, the compose service name (telegraf) should be reachable instead.
You can read more about the same problem here:
How to share localhost between two different Docker containers?
Also note that the Forward output in Fluent Bit uses a binary protocol, as opposed to the plain JSON that I suspect you want to send. Use the tcp output plugin instead.
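A minimal sketch of that tcp output, assuming the compose service name telegraf is used as the host; the Format value is an assumption and should be matched to whatever data_format the Telegraf listener expects:

```ini
[OUTPUT]
    Name    tcp
    Match   *
    Host    telegraf
    Port    8094
    Format  json_lines
```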
It's definitely the output/input plugins you are using. Telegraf has a FluentD input plugin, and it looks like this:
# Read metrics exposed by fluentd in_monitor plugin
[[inputs.fluentd]]
## This plugin reads information exposed by fluentd (using /api/plugins.json endpoint).
##
## Endpoint:
## - only one URI is allowed
## - https is not supported
endpoint = "http://localhost:24220/api/plugins.json"
## Define which plugins have to be excluded (based on "type" field - e.g. monitor_agent)
exclude = [
"monitor_agent",
"dummy",
]
Your Fluent-Bit http output config would look like this:
[INPUT]
    Name  cpu
    Tag   cpu
[OUTPUT]
    Name  http
    Match *
    Host  192.168.2.3
    Port  80
    URI   /something
But Fluent-Bit also has an InfluxDB output plugin.
Fluent-bit has an output plugin named forward; it can forward output using the Fluentd protocol. You can set it up according to this doc: https://docs.fluentbit.io/manual/pipeline/inputs/forward
Then, Telegraf has an input plugin named fluentd; set it as the input to gather metrics from the fluentd client endpoint, which can fulfil your requirement.
I've been using Elasticsearch, Metricbeat and ElastAlert to watch my server. I have nginx installed on it, used as a reverse proxy, and I need to send an alert if nginx drops or returns an error. I already have some alerts configured, but how can I make a rule that sends an alert when nginx drops or returns an error?
Thanks a lot
Metricbeat only collects data about system resource usage. What you need is to install Filebeat and activate its nginx module. Then you can use the any rule type of ElastAlert and filter by fileset.module: nginx and fileset.name: error:
name: your rule name
index: filebeat-*
type: any
filter:
- term:
    fileset.module: "nginx"
- term:
    fileset.name: "error"
alert:
- "slack"
... # your slack config stuff
realert:
  minutes: 1
Just wondering what I am missing here when trying to create an API with Tyk Dashboard.
My setup is:
Nginx > Apache Tomcat 8 > Java Web Application > (database)
Nginx is already working, redirecting calls to apache tomcat at default port 8080.
Example: tomcat.myserver.com/webapp/get/1
200-OK
I have set up tyk-dashboard and tyk-gateway previously as follows, using a custom node port 8011:
Tyk dashboard:
$ sudo /opt/tyk-dashboard/install/setup.sh --listenport=3000 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics --tyk_api_hostname=$HOSTNAME --tyk_node_hostname=http://127.0.0.1 --tyk_node_port=8011 --portal_root=/portal --domain="dashboard.tyk-local.com"
Tyk gateway:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=127.0.0.1 --redisport=6379 --domain=""
/etc/hosts already configured (not really needed):
127.0.0.1 dashboard.tyk-local.com
127.0.0.1 portal.tyk-local.com
Tyk Dashboard configurations (nothing special here):
API name: foo
Listen path: /foo
API slug: foo
Target URL: tomcat.myserver.com/webapp/
What URI am I supposed to call? Is there any setup I need to add in Nginx?
myserver.com/foo → 502 nginx
myserver.com:8011/foo → does not respond
foo.myserver.com → 502 nginx
(everything is running under the same server)
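I assume the nginx side would need something along these lines to route myserver.com/foo to the gateway (just a sketch; the location and upstream address are guesses based on the ports above), but I'm not sure:

```nginx
# Sketch: forward /foo on the public vhost to the Tyk gateway on 8011
location /foo {
    proxy_pass http://127.0.0.1:8011;
}
```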
SOLVED:
Tyk Gateway configuration was incorrect.
Needed to add the --mongo directive and remove the --domain directive in setup.sh:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics
So, calling curl -H "Authorization: null" 127.0.0.1:8011/foo
I get:
{
  "error": "Key not authorised"
}
I am not sure about the /foo path; I think it was previously what the /hello path is now. But it appears there is a key-not-authorised issue. If the call is made using the Gateway API, then the secret value may be missing. It is required when making calls to the gateway (except the hello and reload paths):
x-tyk-authorization: <your-secret>
However, since there is a dashboard present, then I would suggest using the Dashboard APIs to create the API definition instead.
It's the first time I'm trying to use SolrBundle in my Symfony project, installed on XAMPP. Whenever I try to use solr:index:populate I receive the error mentioned in the title of the post. What am I doing wrong? Many thanks in advance for any help. Below is my configuration.
config.yml
fs_solr:
    endpoints:
        core0:
            host: host
            port: 8983
            path: /solr/core0
            core: corename
            timeout: 5
I am new to the ELK Stack.
I am trying to implement it using Windows (ELK server) and a Vagrant CentOS 7 VM (Filebeat shipper).
For starters, I am trying to ship the Unix syslog to the ELK server and see how it works.
I have configured the central.conf file for Logstash on my Windows Machine as
input {
  beats {
    port => 5044
  }
}
output {
  stdout { }
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
and Filebeat YAML on Unix (CentOS - 7) is configured as
filebeat:
prospectors:
-
paths:
-"/var/log/*.log"
input_type: log
document_type: beat
registry: "/var/lib/filebeat"
output:
logstash:
hosts: ["127.0.0.1:5044"]
logging:
to_files: true
files:
path: "/var/log/filebeat"
name: filebeat.log
rotateeverybytes: 10485760
level: debug
Elasticsearch and Logstash are running properly on my Windows machine.
I am facing the following two issues right now:
1. When I try to run the Filebeat shipper on Unix, it gives me the error below:
[root@localhost filebeat]# filebeat -e -v -c filebeat.yml -d "*"
2016/05/08 11:07:00.404841 beat.go:135: DBG Initializing output plugins
2016/05/08 11:07:00.404873 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/05/08 11:07:00.404886 publish.go:269: INFO No outputs are defined. Please define one under the output section.
Error Initialising publisher: No outputs are defined. Please define one under the output section.
2016/05/08 11:07:00.404902 beat.go:140: CRIT No outputs are defined. Please define one under the output section.
2. When I looked at the Logstash logs, I found it is trying to listen for Beats input on "0.0.0.0:5044" rather than on "127.0.0.1:5044":
{:timestamp=>"2016-05-08T16:36:07.158000+0530", :message=>"Beats inputs: Starting input listener", :address=>"0.0.0.0:5044", :level=>:info}
Are these two issues interrelated? How can I resolve them? Could someone please point me in the right direction to get this working?
I'd really appreciate any help.
The error says:
No outputs are defined. Please define one under the output section.
If your YAML file is the same as in the question, it is wrong, because indentation matters in YAML. It must be like:
...
output:
  logstash:
    hosts: ["127.0.0.1:5044"]
...