Logstash shipper & server on the same box - syslog

I'm trying to set up a central Logstash configuration. However, I would like to send my logs through syslog-ng rather than third-party shippers. This means that my Logstash server accepts, via syslog-ng, all the logs from the agents.
I then need to install a Logstash process that reads from /var/log/syslog-clients/* and picks up all the log files sent to the central log server. These logs are then sent to Redis on the same VM.
In theory I also need to configure a second Logstash process that reads from Redis, indexes the logs, and sends them to Elasticsearch.
My question:
Do I have to use two different Logstash processes (shipper & server) even if they are on the same box (I want one log server instance)? Is there any way to have just one Logstash configuration, with a single process that reads from syslog-ng ---> writes to Redis and also reads from Redis ---> outputs to Elasticsearch?
Diagram of my setup:
[client]-------syslog-ng---> [log server] ---syslog-ng <----logstash-shipper ---> redis <----logstash-server ----> elastic-search <--- kibana

Do I have to use two different Logstash processes (shipper & server) even if they are on the same box (I want one log server instance)?
Yes.
Is there any way to have just one Logstash configuration, with a single process that reads from syslog-ng ---> writes to Redis and also reads from Redis ---> outputs to Elasticsearch?
Not that I have seen yet.
Why would you want this? I have a single-machine config and a remote-machine config, and they work extremely reliably with a small footprint. Maybe you could explain your reasoning a bit - I know I would be interested to hear about it.
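For reference, a minimal sketch of the two configurations might look like the following (the file path comes from the question, but the file names, the Redis list key, and the localhost addresses are assumptions; option names follow the older Logstash 1.x syntax):

# shipper.conf - reads the files syslog-ng writes and pushes raw events to Redis
input {
  file {
    path => "/var/log/syslog-clients/*"
    type => "syslog"
  }
}
output {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"        # assumed list key; must match the indexer below
  }
}

# indexer.conf - pulls events from Redis and indexes them into Elasticsearch
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"      # newer Logstash versions use hosts => ["127.0.0.1"] instead
  }
}

Each file is then run by its own process, e.g. logstash -f shipper.conf and logstash -f indexer.conf.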

Is it normal for my router to have activity on port 111?

What are typical results of nmap 198.168.1.1 for an average Joe? What would be a red flag?
PORT STATE SERVICE
111/tcp filtered rpcbind
What does this mean in context and is it something to worry about?
Basically, rpcbind is a service that enables file sharing over NFS. The rpcbind utility is a server that converts RPC program numbers into universal addresses. It must be running on the host to be able to make RPC calls on a server on that machine. When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. So if you have a use for file sharing, it's fine; otherwise it is unneeded and a potential security risk.
You can disable them by running the following commands as root:
update-rc.d nfs-common disable
update-rc.d rpcbind disable
That will prevent them from starting at boot, but if they are already running they will keep running until you reboot or stop them yourself.
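To stop them right away without rebooting (assuming the same sysvinit/Upstart-style Ubuntu that update-rc.d implies; service names may differ on other distributions), you could also run as root:
service nfs-common stop
service rpcbind stop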
And if you are looking to get into a system through this service, there is plenty of reading material available on Google.

Trouble connecting to gRPC server on AWS Fargate

I have a Python gRPC server running on AWS Fargate (configured very similarly to this AWS guide here), and another AWS Fargate task (call it the "client") that attempts to make a connection to my gRPC server (also using Python gRPC). However, the client is unable to make a call to my server, failing with the following error:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"#1619057124.216955000","description":"Failed to pick subchannel",
"file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":5397,
"referenced_errors":[{"created":"#1619057124.216950000","description":"failed to connect to all addresses",
"file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc",
"file_line":398,"grpc_status":14}]}"
Based on my reading online, it seems like there are myriad situations in which this error is thrown, and I'm having trouble figuring out which one pertains to my case. Here is some additional information:
When running client and server locally, I am able to successfully connect by having the client connect to localhost:[PORT]
I have configured an application load balancer target group following the guide from AWS here that makes health check requests to the / route of my gRPC server, using the gRPC protocol, and expect gRPC response code 12 (UNIMPLEMENTED); these health check requests are coming back as expected, which I believe implies the load balancer is able to successfully communicate with the server (although I could be misunderstanding)
I configured a service discovery system (following this guide here) that should allow me to reach my gRPC server within my VPC via the name service-name.dev.co.local. I can confirm that the corresponding DNS record exists in Route 53, and when I SSH into my VPC, I am indeed able to ping service-name.dev.co.local successfully.
Anyone have any ideas? Would appreciate any and all advice, and I'm happy to answer any further questions.
Thank you for your help!
On your gRPC server, bind to 0.0.0.0:[port] rather than localhost, and expose that port with TCP on your container.
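A minimal sketch of that binding with the standard grpcio API (the port number and the generated servicer names are placeholders, not taken from the question):

from concurrent import futures
import grpc

# your_pb2_grpc / YourServicer stand in for your generated stubs and servicer implementation
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# your_pb2_grpc.add_YourServiceServicer_to_server(YourServicer(), server)

# Bind on all interfaces so the Fargate task's port mapping can reach the process;
# binding to localhost/127.0.0.1 only works for clients on the same host.
server.add_insecure_port("0.0.0.0:50051")
server.start()
server.wait_for_termination()

The container definition then needs to expose 50051/TCP, and clients inside the VPC would dial service-name.dev.co.local:50051.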

corda CENM networkmap server start failing to connect database after a few week run

We operate CENM (1.2, run on a k8s cluster using the Helm templates) to construct our own private network. After keeping the CENM network map server running for a few weeks, launching new nodes starts failing.
On further investigation, it appears that a request timeout for http://nmap:10000/network-map causes the problem.
In the network map server's log, we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows there are at least 3 established connections to the database from the container in which the network map server runs, and I can also connect to the database directly using the CLI.
So I don't think it is either database saturation or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
regards,
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing if increasing the pool size in the network map configuration helps (see below).
Corda uses HikariCP for the connection pool. To configure the connection pool, any custom properties can be set in the dataSourceProperties section:
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    ...
    # maximum number of connections HikariCP will open; raise this if the pool is being exhausted
    maximumPoolSize = 10
    # milliseconds to wait for a free connection before failing (the error above fired at 30000)
    connectionTimeout = 50000
}
Has a health check been conducted to verify there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
Another option, to get more information logged from the network map service, is to run with TRACE logging:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-file <config file>

Listen voice messages on server A from server B with asterisk

I have Realtime Asterisk with 3 servers. In the database I hold only SIP peers and voicemail boxes. Voicemail messages are stored on the filesystem (FILE_STORAGE).
Servers A and B are for calls and SIP registrations, and Server C is for DUNDi.
Currently everything works fine: I can call from Server A to Server B. The problem is when I leave a message for a number that is busy and registered on Server B; if that number then disconnects and registers on Server A, it can't listen to the messages, because they are stored on Server B.
How can I make any user able to listen to his messages no matter which server he is registered on?
You have a lot of options, most of them in the clustering area.
The simplest options are:
A GlusterFS setup on both servers, with voicemail in a GlusterFS directory. This one does failover.
An NFS/Samba share on both servers.
MySQL master-master replication: use ODBC_STORAGE and put all voicemails in the database. This one is recommended if you also want easy access to your voice files from a web interface and simple search/lookup/retrieval of messages. It is highly recommended to use InnoDB tables and an optimized MySQL config.
The easiest way to be able to listen to messages no matter which of the two servers the user is registered on is NFS, mounting for example /var/spool/asterisk/. In this case you need to install some additional components.
Here is a great tutorial on how to do this:
How to configure an NFS server and mount NFS shares - Ubuntu
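As a rough sketch (the hostnames serverA/serverB and the choice to export the whole spool directory are assumptions, not from the question):

# on Server A (the NFS server), export the voicemail spool in /etc/exports:
/var/spool/asterisk  serverB(rw,sync,no_subtree_check)

# on Server B (the NFS client), mount it over the local spool directory:
mount -t nfs serverA:/var/spool/asterisk /var/spool/asterisk

You would normally also add the mount to /etc/fstab so it survives a reboot; the linked tutorial covers that.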
Another way, if you can, is to make a master-slave setup with the two servers in a cluster and use rsync. Then you can sync the folder to the remote server every X minutes/hours/days to keep the messages in case of failure.
rsync -a local_dir/ user@remote-host-ip:/path/to/dir

Service dies when Nmap is run

I am having a weird problem.
I have a service running on port 8888 on one of my many servers in a cluster.
When I run nmap on my gateway to get all the IPs inside my network, this service miraculously dies. Since nmap does a port scan too, it might have something to do with it, but I am not sure.
The nmap command I am using is this:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess
Can someone tell me what might be happening?
While Nmap developers try to limit the danger, Nmap scans can still crash services. The most likely culprit for crashing a service (as opposed to crashing an entire machine) is the service version detection scan phase (-sV, implied in your command by -A). This scan sends a series of data packets to the service in an attempt to elicit a response which can be matched against Nmap's database of known services. When a match is found, Nmap stops sending probes. That means that an unknown service can get lots of probes sent to it which contain binary data, command strings, and other data that your service is not expecting.
A well-written network service will not crash on any input; your service has a bug of some sort. Avoiding this sort of crash usually means avoiding scanning that service:
You can use the Exclude directive in your nmap-service-probes data file to instruct Nmap to never send these service probes to port 8888 (see the sketch after this list).
You can avoid scanning port 8888 at all by changing the ports you scan with -p. Later versions of Nmap will support the --exclude-ports option, too.
You can make sure you are using the latest version of Nmap. If your service's fingerprint was added to the nmap-service-probes file, then Nmap will stop sending probes when it detects it, which may avoid sending the later probe that crashes it.
You can reduce the intensity of the service scan with the --version-intensity option. This prevents Nmap from sending so many service probes, which may eliminate the one that is crashing your service.
Finally, if this service is a standard one and not something custom to your own network, you can report it to The Network Scanning Watch List so that other users can avoid crashing it as well.
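As a rough illustration of the exclusion options above, reusing the command from the question (the Exclude line goes into the nmap-service-probes file your Nmap installation actually reads, and the intensity value 2 is just an example):

# nmap-service-probes: never send version-detection probes to TCP port 8888
Exclude T:8888

# or handle it from the command line instead:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --exclude-ports 8888
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --version-intensity 2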
