How to do client authentication in a StatsD server? - graphite

I have installed StatsD on Debian following https://www.digitalocean.com/community/tutorials/how-to-configure-statsd-to-collect-arbitrary-stats-for-graphite-on-ubuntu-14-04
Using echo "sample.set:50|s" | nc -u -w0 127.0.0.1 8125 we can send metric data to the StatsD server. This means anyone can send data to the server.
How can I restrict this to predefined clients, so that only those who have already registered can send data to the StatsD server?

StatsD can't do this by itself, since the only thing it should do is metric aggregation.
You can write a customized StatsD proxy, as our team did, to handle things like key routing, authentication, and monitoring.
After that you can use something like iptables so that your StatsD machine only accepts packets from the StatsD proxy.
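A minimal iptables sketch of that last step, assuming the proxy's address is 10.0.0.5 and StatsD listens on UDP 8125 (both values are placeholders):
iptables -A INPUT -p udp --dport 8125 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p udp --dport 8125 -j DROP
The first rule accepts StatsD traffic only from the proxy's address; the second drops that traffic from everyone else.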

Related

Verifying that Gatling is sending events on a certain port

I am sending events from Gatling using the graphite protocol on the default port 2003. The whole setup is local (including InfluxDB and Grafana). Now I want to verify in the Gatling logs that events are actually passing through port 2003. How can I verify that? In Gatling's debug logs I am not finding anything related to graphite or port 2003.
Please help, and let me know if you want me to add more info.
I wanted to continue in the previous question... but let's continue here then. To understand what data Gatling sends, you can use the netcat/nc utility.
This will listen for incoming data on port 2003:
nc -k -l 2003
(don't forget to turn off InfluxDB, or use a different port in nc and in Gatling's configuration)
You can also emulate Gatling's data without running it, and send directly to InfluxDB:
echo "gatling.example.get_request.all.percentiles99 155 1615558965" | nc localhost 2003

How to forward data from desktop application through fiddler or mitmproxy?

I am using a Windows 10 desktop app which I know communicates with its server via TCP packets. The payloads are encrypted. There is a chance that, if the app uses TLS, a proxy like mitmproxy or Fiddler will be able to decrypt the data.
The app is also assigned a different port every time it launches. So far the only promising lead was netsh:
netsh interface portproxy add v4tov4 listenport=appPort listenaddress=appLocalIP connectport=fiddlerListeningPort connectaddress=fiddlerLocalIP
I ran this command after the app was already running, because I cannot determine its local port beforehand, but it did nothing. I was unable to find any other way to force the app to route its traffic through Fiddler/mitmproxy.
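One way to discover the app's ephemeral local port once it is running is to look it up by process ID; a sketch assuming a placeholder PID of 1234:
netstat -ano | findstr 1234
The last column of netstat -ano output is the owning PID, so this shows every local address and port that process currently has open.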

Ping operation in Dynatrace?

The SiteScope tool can monitor hosts with a ping operation, with a configurable ping frequency and email alerts as well.
Does Dynatrace support ping operations and email alerting?
You can't do a ping from Dynatrace, but that is probably not what you want to do anyway, because it only tells you that the host is up and reachable via ICMP.
What you can do with Dynatrace is execute a synthetic HTTP call against an endpoint on that host to see if your application is up and running.
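The manual equivalent of such a synthetic check, outside Dynatrace, is a plain HTTP request against a health endpoint; a sketch using curl (the URL is a placeholder):
curl -f -s -o /dev/null -w "%{http_code}\n" http://myhost.example.com/health
The -f flag makes curl exit non-zero on HTTP errors, so the same command can serve as a scripted up/down probe.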

Best practice for collectd and graphite

What is the best way to send metrics from multiple servers to Graphite (Grafana will then be used to view the Graphite data)?
1) Install the collectd daemon on all servers and send to a central collectd server; configure the collectd server to forward the data to Graphite.
2) Install the collectd daemon on all servers and configure each one to send metrics directly to the Graphite server (see the sketch below).
Thank you
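For reference, option 2 means each server's collectd loads the write_graphite plugin; a rough sketch of the collectd.conf fragment (the hostname is a placeholder):
LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "graphite">
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
Option 1 would instead use collectd's network plugin to ship metrics to the central collectd server, with only that server loading write_graphite.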

How to disable HTTP requests to Hadoop RPC port

I have enabled security for my Hadoop cluster and it works fine. But when I visit http://namenode_host:8020, it shows:
It looks like you are making an HTTP request to a Hadoop IPC port. This is not the correct port for the web interface on this daemon.
I don't want this behavior, because the message is unencrypted, and our company policy is to encrypt the data on all ports. 8020 is a Hadoop RPC port. Any idea how to disable HTTP requests to the Hadoop RPC port?
Take a look at the Data Confidentiality section of the Apache docs; I think you are looking for RPC encryption.
8020 is the default port of the Hadoop file system; it listens for IPC calls from HDFS clients to the Hadoop NameNode for HDFS metadata operations. You should not try to access it directly over HTTP. If you want to work with your data on HDFS over the web, you have to use the WebHDFS API, which lets you perform web requests against data in the file system.
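RPC encryption is controlled by hadoop.rpc.protection in core-site.xml; setting it to privacy encrypts RPC traffic:
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
And a sample WebHDFS call for comparison (the /tmp path is a placeholder; the NameNode HTTP port is 50070 on Hadoop 2 and 9870 on Hadoop 3):
curl "http://namenode_host:9870/webhdfs/v1/tmp?op=LISTSTATUS"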
