I am trying to simulate a syslog Flume agent, which should eventually put the data into HDFS.
My scenario is as follows:
The syslog Flume agent is running on physical server A, with the following configuration details:
===
syslog_agent.sources = syslog_source
syslog_agent.channels = MemChannel
syslog_agent.sinks = HDFS
# Describing/Configuring the source
syslog_agent.sources.syslog_source.type = syslogudp
#syslog_agent.sources.syslog_source.bind = 0.0.0.0
syslog_agent.sources.syslog_source.bind = localhost
syslog_agent.sources.syslog_source.port = 514
# Describing/Configuring the sink
syslog_agent.sinks.HDFS.type=hdfs
syslog_agent.sinks.HDFS.hdfs.path=hdfs://<IP_ADD_OF_NN>:8020/user/ec2-user/syslog
syslog_agent.sinks.HDFS.hdfs.fileType=DataStream
syslog_agent.sinks.HDFS.hdfs.writeFormat=Text
syslog_agent.sinks.HDFS.hdfs.batchSize=1000
syslog_agent.sinks.HDFS.hdfs.rollSize=0
syslog_agent.sinks.HDFS.hdfs.rollCount=10000
syslog_agent.sinks.HDFS.hdfs.rollInterval=600
# Describing/Configuring the channel
syslog_agent.channels.MemChannel.type=memory
syslog_agent.channels.MemChannel.capacity=10000
syslog_agent.channels.MemChannel.transactionCapacity=1000
#Bind sources and sinks to the channel
syslog_agent.sources.syslog_source.channels = MemChannel
syslog_agent.sinks.HDFS.channel = MemChannel
I am sending syslog "logs" from a different physical server B using the built-in utility "logger", like this:
sudo logger --server <IP_Address_physical_server_A> --port 514 --udp
I do see the log messages going into physical server A's path /var/log/messages.
But I don't see any messages going into HDFS; it seems the Flume agent isn't able to get any data, even though the messages are getting from server B to server A.
Am I doing something wrong here? Can anyone help me resolve this?
EDIT
The following is the output of the netstat command on server A, where the syslog daemon is running:
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 573/rsyslogd
tcp6 0 0 :::514 :::* LISTEN 573/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 573/rsyslogd
udp6 0 0 :::514 :::* 573/rsyslogd
I'm not sure what logger --server gives you, but most examples I have seen use netcat.
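For instance, a single syslog-style line can be pushed over UDP with netcat (a sketch; substitute server A's address, and note that <13> is just a sample syslog priority value):
echo '<13>test message' | nc -u -w1 <IP_Address_physical_server_A> 514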
In any case, you've set batchSize=1000, so until you send 1000 messages, Flume will not write to HDFS.
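If you just want to see events land in HDFS quickly while testing, you could drop the thresholds to something small (a sketch; revert these for real use):
syslog_agent.sinks.HDFS.hdfs.batchSize=1
syslog_agent.sinks.HDFS.hdfs.rollCount=10
syslog_agent.sinks.HDFS.hdfs.rollInterval=30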
Keep in mind, HDFS is not a streaming platform, and prefers not to have small files.
If you're looking for log collection, look into Elasticsearch or Solr fronted by a Kafka topic.
My current CentOS 7 server is already running Apache web server 2.4.x, using the default ports 80 and 443. Puppet Enterprise 2019.x, which uses nginx (pe-nginx, to be exact), is configured by default to use the exact same ports.
What needs to be changed to make the pe-nginx web server use ports 8090 and 444 instead of the default 80 and 443?
According to https://puppet.com/docs/pe/2019.0/config_console.html I should disable the HTTPS redirect. Here are the instructions I tried:
The pe-nginx webserver listens on port 80 by default. If you need to run your own service on port 80, you can disable the HTTPS redirect.
Edit your Hiera.yaml file to disable HTTP redirect.
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false
This is the modified file: /etc/puppetlabs/code/environments/production/hiera.yaml
---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://puppet.com/docs/puppet/latest/environments_about.html for further details on environments.
  # datadir: data
  # data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "nodes/%{::trusted.certname}.yaml"
  - name: "Other YAML hierarchy levels"
    paths:
      - "common.yaml"
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false
I am new to YAML, but I can see that this is probably not right; I tried it anyway.
The documentation does not say what to do after changing the file to implement the change, so this is what I tried:
puppet infrastructure configure --recover
Notice: Unable to recover PE configuration: The Lookup Configuration at '/etc/puppetlabs/code/environments/production/hiera.yaml' has wrong type, unrecognized key 'puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect'
2019-05-07T15:41:29.722+00:00 - [Notice]: Compiled catalog for tadm10-adm.test.hfgs.net in environment enterprise in 2.08 seconds
2019-05-07T15:41:42.489+00:00 - [Notice]: Applied catalog in 12.05 seconds
netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 32272/nginx: master
While I never could figure out how to accomplish this using Puppet Labs' suggestion of modifying the hiera.yaml file, I have figured out how to do it using the Web Console.
The modifications remove all conflicts with the existing Apache httpd, which uses ports 80 and 443.
The PE Web Console will now need to be accessed via port 444.
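For reference, the hiera.yaml attempt above likely failed because in Hiera 5 that file only defines the lookup hierarchy; class parameter overrides belong in a data file instead. An untested sketch:
# /etc/puppetlabs/code/environments/production/data/common.yaml
puppet_enterprise::profile::console::proxy::http_redirect::enable_http_redirect: false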
This is the fix:
From the Web Console
Select Configure
Select Classification
Select the + icon labeled "PE Infrastructure" production to display Classes
Select PE Console production link
Select Configuration tab
Under "Classes" section - Add new class
Select "puppet_enterprise::profile::console::proxy::http_redirect" from the list
Select Add class button
Select commit 1 change
The new class now shows on the page.
Select parameter name: enable_http_redirect from the list
Set value to false
Add parameter
Select commit 1 change
Select parameter name: ssl_listen_port from the list
Set value to 444
Add parameter
Select commit 1 change
When running puppet agent -t, I now get the error shown below:
Duplicate declaration: Class[Puppet_enterprise::Profile::Console::Proxy::Http_redirect] is already declared;
cannot redeclare (file: /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/console/proxy.pp,
line: 211)
Remove the duplicate declaration from proxy.pp
Edit: /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/console/proxy.pp
#class { 'puppet_enterprise::profile::console::proxy::http_redirect' :
# ssl_listen_port => Integer($ssl_listen_port),
#}
Re-run puppet agent -t
puppet agent -t
Console Port(Port 443 change)
From the Web Console
Configure
Classification
Select PE Infrastructure production
Configuration tab
Class: puppet_enterprise::profile::console
Add Parameter
Parameter Name: console_port
Value: 444
Run puppet agent -t and check ports
# puppet agent -t
# netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:444 0.0.0.0:* LISTEN 11182/nginx: master
Start httpd
# systemctl start httpd
# netstat -tulpn | grep -v tcp6|grep ":443\|:80\|:8090\|:444"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 13353/httpd
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 13353/httpd
tcp 0 0 0.0.0.0:444 0.0.0.0:* LISTEN 11182/nginx: master
Accessing the PE Web Console is now via port 444:
https://hostname:444/#/inspect/overview
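To confirm the console responds on the new port from the command line (a sketch; -k skips certificate verification):
# curl -kI https://hostname:444/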
Is there a way to check if the local host is making an FTP connection to another server?
The requirement is like this: local host -> serverA.
Remote server -> serverB.
I need to check if serverA is making an FTP connection to serverB.
So whenever serverA makes an FTP connection to serverB, how do I get notified?
I tried this: ps -ef | grep -i ftp; however, since the grep process itself also shows up, I can't use this in a shell script. Is there a better way to check if serverA is making FTP connections to serverB and, if so, get notified / log it to a file?
Thanks
Your problem of "ps -ef | grep -i ftp" also reporting the grep process comes from grep searching for the string "ftp": the grep command line itself contains that string. It would also match any other process that merely has the word 'ftp' somewhere in its command line.
To fix that, check whether you have the procps tools pgrep and pkill installed. They are very helpful for 'grepping' processes and their command lines.
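For example (a sketch; with -x only an exact process-name match counts, so pgrep does not report itself):
pgrep -x ftp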
To solve your initial problem, check whether you have the ss command (show sockets, from the iproute2 package) installed.
Its output might be useful (11.22.33.44 is your local IP, 130.133.3.130 the remote):
root:sigkill:~/# ss -p|cat
State Recv-Q Send-Q Local Address:Port Peer Address:Port
[...]
ESTAB 0 0 11.22.33.44:43681 130.133.3.130:ftp users:(("ftp",19729,4),("ftp",19729,3))
[...]
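To turn that into something that logs to a file, a rough sketch that polls ss every 10 seconds (assumes serverB is 130.133.3.130 as above; the log path is arbitrary):
while sleep 10; do
  if ss -tn state established '( dport = :21 )' | grep -q '130.133.3.130'; then
    echo "$(date): ftp connection to serverB detected" >> /tmp/ftp-watch.log
  fi
done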
There are a few approaches that you could take:
You could poll running processes for ftp. This wouldn't catch other FTP clients (if you care about that), and it wouldn't catch very short ftp sessions that slip between polls.
If your system supports execution logging, you could log all executions of ftp. Again, this wouldn't catch other FTP clients.
You could watch for outbound connections on port 21/tcp using some mechanism provided by your system (for instance, on Linux, use an iptables rule that matches outbound FTP connections to any servers that you care about and logs them using the LOG target). This would catch all connections regardless of client, but tracking down the process and user would be a little more complicated.
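A minimal sketch of that last approach, with <serverB_IP> standing in for the real address (logs every new outbound FTP control connection to the kernel log):
iptables -A OUTPUT -p tcp --syn -d <serverB_IP> --dport 21 -j LOG --log-prefix "FTP-OUT: "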
You can use grep ftp /etc/services to list the port numbers assigned to the various FTP services:
$ grep ftp /etc/services
ftp-data 20/tcp
ftp-data 20/udp
...
ftp 21/tcp
ftp 21/udp fsp fspd
...
sftp 115/tcp
sftp 115/udp
...
ftp-data 20/sctp # FTP
ftp 21/sctp # FTP
...
ftps-data 989/tcp # ftp protocol, data, over TLS/SSL
ftps-data 989/udp # ftp protocol, data, over TLS/SSL
ftps 990/tcp # ftp protocol, control, over TLS/SSL
ftps 990/udp # ftp protocol, control, over TLS/SSL
Use netstat to see the open connections, e.g. for plain FTP:
$ netstat -tan | grep \:21
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN
tcp 0 0 :::21 :::* LISTEN
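Note that LISTEN lines like these belong to a local FTP server; to catch outbound client connections from serverA to serverB, filter on the foreign address column instead (a sketch):
$ netstat -tan | awk '$5 ~ /:21$/ && $6 == "ESTABLISHED"'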
I'm trying to write the Linux client script for a simple port knocking setup. My server has iptables configured to require a certain sequence of TCP SYNs to certain ports for opening up access. I'm able to successfully knock using telnet or manually invoking netcat (Ctrl-C right after running the command), but I'm failing to build an automated knock script.
My attempt at an automated port knocking script consists simply of "nc -w 1 x.x.x.x 1234" commands, which connect to x.x.x.x port 1234 and time out after one second. The problem, however, seems to be the kernel(?) doing automated SYN retries. Most of the time, more than one SYN is sent during the one second nc tries to connect. I've checked this with tcpdump.
So, does anyone know how to prevent the SYN retries and make netcat simply send only one SYN per connection/knock attempt? Other solutions which do the job are also welcome.
Yes, I checked that you can use nc too:
$ nc -z example.net 1000 2000 3000; ssh example.net
The magic comes from -z (zero-I/O mode)...
You can use nmap for port knocking (SYN). Just execute:
for p in 1000 2000 3000; do
  nmap -Pn --max-retries 0 -p $p example.net
done
Try this (as root; note that it changes SYN retry behaviour system-wide):
echo 1 > /proc/sys/net/ipv4/tcp_syn_retries
or this, per socket, in your own code:
#include <netinet/tcp.h>  /* for TCP_SYNCNT (Linux-specific) */

int sc = 1;
setsockopt(sock, IPPROTO_TCP, TCP_SYNCNT, &sc, sizeof(sc));  /* at most one SYN retransmit */
You can't prevent the TCP/IP stack from doing what it is expressly designed to do.
I am new to using LDAP and slapd, and I am having some trouble getting my client machine to connect to the server that is hosting slapd.
Here is the rundown:
On an Ubuntu box I have an instance of VirtualBox running a VM with CentOS. I have installed and configured slapd on the CentOS VM, and as long as I am on the VM I can use ldapsearch, ldapadd, etc. Once I move to the client machine (the Ubuntu distro housing the VM), I run the following:
ldapsearch -x -LLL -b 'dc=example,dc=com' 'uid=Al' -d 255 -H ldap://192.168.1.73:389/
and the following is what I get
ldap_url_parse_ext(ldap://192.168.1.73:389/)
ldap_create
ldap_url_parse_ext(ldap://192.168.1.73:389/??base)
ldap_pvt_sasl_getmech
ldap_search
put_filter: "(objectclass=*)"
put_filter: simple
put_simple_filter: "objectclass=*"
ldap_build_search_req ATTRS: supportedSASLMechanisms
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP 192.168.1.73:389
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 192.168.1.73:389
ldap_pvt_connect: fd: 3 tm: -1 async: 0
ldap_close_socket: 3
ldap_msgfree
ldap_err2string
ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
I can connect to the VM via ssh and run ldapsearch there, so connectivity shouldn't be an issue. I have configured the router to make the machines' IPs static (both the VM and the physical box).
Any help I could get would be very appreciated.
Thanks,
Al
Firewall? It wouldn't be inconceivable that, out of the box, the firewall would allow ssh through but not LDAP. You also need to verify that your LDAP server is configured to listen on the outside interface and not just the loopback. OpenLDAP logging can also be set up to be very verbose about the connections it is receiving. You should do that and monitor your syslog while attempting to connect. That should give you enough information to figure out where the connection is being blocked.
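A quick triage sketch, run on the CentOS VM (assumes iptables is in use; adapt if you are on firewalld):
ss -tlnp | grep 389                              # listening on 0.0.0.0, or only 127.0.0.1?
iptables -L -n | grep 389                        # is there a rule for LDAP?
iptables -I INPUT -p tcp --dport 389 -j ACCEPT   # temporarily allow it while testing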
I have box A, and it has a consumer on it that listens on a RabbitMQ server.
I have box B that will publish a message to the listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
telnet on port 22 is fine; I can ssh into box A from box B.
So I assume I need to change the IP that the RabbitMQ server uses.
I found this: http://www.rabbitmq.com/configure.html and I now have a config file in the location the documentation said to use, with the name rabbitmq.config, and it contains:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started the RabbitMQ server again. It failed. Here are the errors from the error logs. It's a little over my head (in fact, most of this is):
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
and here is some more from the start up log:
Erlang has closed
Error: {node_start_failed,normal}
^M
Crash dump was written to: erl_crash.dump^M
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})^M
Please help
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
Also, per that documentation, the default is to bind to all interfaces. Perhaps there is a configuration setting or environment variable already set in your system that restricts the server to localhost, overriding anything else you do.
UPDATE: After reading again, I realize that the telnet should have returned "Connection refused", not "No route to host". I would also check to see if you are having a firewall-related issue.
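Incidentally, the case_clause in your boot log points at the rabbitmq.config itself: tcp_listeners takes a list of entries, so the tuple must be wrapped in square brackets (a sketch):
[
  {rabbit, [{tcp_listeners, [{"box.a.ip.addy", 5672}]}]}
].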
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric#dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a rule for the port (note: 15672 is the management UI port; the AMQP port itself is 5672):
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
-A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
COMMIT
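Then reload the firewall so the rule takes effect (a sketch for RHEL/CentOS-style systems that use the iptables service):
sudo service iptables restart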