Graphite not accepting values in the future

I'm investigating Graphite as a possible time-series database (TSDB).
The problem I have is that Graphite is not writing values whose timestamps are in the future. Am I missing something?
Simple test with nc
echo "psoll 20 1570560798" | nc localhost 2003 -D
The Unix timestamp given above is in the future at the time of testing.

Related

Is there a very simple graphite tutorial available somewhere?

Given that I have Graphite installed within Docker, does anyone know of a very simple Graphite tutorial that shows how to feed in data and then plot that data on a graph in the Graphite webapp? I mean the very basics, not the endless configuration and page after page of setting up the various components.
I know there is the official Graphite documentation, but it is setup after setup after setup of the various components, which is enough to drive anyone away from using Graphite.
Since Graphite is running within Docker, as a start I just need to know the steps for feeding in data as plain text, displaying that data in the Graphite webapp, and querying the data back.
I assume you have containerized and configured all the Graphite components.
First, be sure you have published the plaintext and pickle ports (default: 2003-2004) if you plan to feed Graphite from the local or an external host.
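For example, with the official Docker image, publishing the webapp and Carbon ports might look like this (the image name and port mappings are assumptions based on a common setup):
docker run -d --name graphite -p 80:80 -p 2003-2004:2003-2004 graphiteapp/graphite-statsd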
After that, according to the documentation, you can use a simple netcat command to send metrics to Carbon over TCP/UDP in the format <metric path> <metric value> <metric timestamp>:
while true; do
  echo "local.random.diceroll $RANDOM `date +%s`" | nc -q 1 ${SERVER} ${PORT}
done
You should then see the path local/random/diceroll in the graphite-web GUI, with a graph of random integers.
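To query the data back, you can hit the render API; a minimal sketch, assuming graphite-web is published on port 80:
curl 'http://localhost/render?target=local.random.diceroll&from=-10min&format=json'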
Ref: https://graphite.readthedocs.io/en/latest/feeding-carbon.html

I want a proxy in Apigee to call itself every 1 minute

I have a proxy in Apigee. I want that proxy to call itself every 1 minute.
Can anyone help me with a simple policy/JavaScript policy so that it calls itself every minute?
Why does Google have no auto-call function?
It is not good practice to call an API proxy continuously from the same proxy, or from a different proxy, in Apigee. Doing so may bring down the server if you are dealing with large data.
However, you can do this outside of Apigee by running a Jenkins job that calls your proxy every minute, or by creating a Unix or batch script and running it as a cron job.
Shell script: on Unix, create a test.sh file and add a curl command plus any other instructions you want. The article below has more details on curl.
https://www.thegeekstuff.com/2012/04/curl-examples/?utm_source=feedburner
Now schedule it using a cron job; the post below describes how to do that, and a minimal sketch follows.
https://askubuntu.com/questions/852070/automatically-run-a-command-every-5-minutes/852074
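A minimal sketch of such a script (the proxy URL is a placeholder; adjust it to your org and environment):
#!/bin/sh
# test.sh - call the Apigee proxy and log the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" "https://your-org-test.apigee.net/your-proxy"
Then schedule it to run every minute via crontab:
* * * * * /path/to/test.sh >> /var/log/proxy-ping.log 2>&1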
Thanks

calculate network traffic per process zabbix

I'm using Zabbix 3.2. I want to calculate the traffic statistics on network interface based on the program name.
For example, to get total incoming traffic we use net.if.in[if,]; in the same way, is it possible to retrieve the traffic used by each running process, as in nethogs? If so, please share the item key, or a shell script that does the same.
Thanks in advance.
You haven't specified the operating system, but the question is tagged 'unix' and you mention nethogs and shell scripts - I'll assume Linux.
It might be a bit too much to monitor traffic for all of the processes - there could be hundreds of them, and even though many would not use the network, on a server system many would.
It is also important how you want to structure the data. For example, do you want to split it up per process name, or per individual process? Or maybe even process name and its parameters - in case of running several Java JVMs on the same box. You would have to decide on all this, as it will affect the data collection.
As for sending data to Zabbix, the simplest way on the Zabbix side would be monitoring by process name only and creating items in advance, if you know all the process names you will be interested in. If you do not know them, you will have to use Zabbix low-level discovery (LLD) to create items automatically as new processes appear; a discovery sketch follows.
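If you go the LLD route, the discovery item just has to return JSON in the format Zabbix 3.2 expects. A minimal sketch (the {#PROCNAME} macro name is an assumption, and process names containing JSON-special characters would need escaping):
#!/bin/sh
# Discovery script: emit one {#PROCNAME} entry per distinct process name
ps -eo comm= | sort -u | awk 'BEGIN {printf "{\"data\":["}
{if (n++) printf ","; printf "{\"{#PROCNAME}\":\"%s\"}", $0}
END {print "]}"}'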
And we finally get to the data collection part. Here it might indeed be easiest to use nethogs (keeping in mind that UDP is not supported). You can run nethogs in "trace" mode, which is pretty much the same as the "batch" mode of top. In this mode, output is simply printed to stdout:
nethogs -c 1 -d 60 -t
Here, the parameters mean:
-c - how many times to print output
-d - for how long to sleep between iterations, including the time before the first output
-t - tracing or batch mode
Nethogs also supports setting traffic output type with the -v flag. You'd have to decide how you want to visualise this:
0 - KB/s
1 - total KB
2 - total B
3 - total MB
With Zabbix, you probably will not want to use modes 1 or 3 - it is better to store data in bytes and let Zabbix apply the unit multiplier as needed. In the KB/s mode (0), it is probably worth adding an item multiplier of 1024 to store the data in bytes and again benefit from Zabbix's automatic unit handling. Note that in any case you will want to run nethogs instances back-to-back, to avoid windows where you are not collecting data. One way to minimise the possibility of such windows would be to run nethogs constantly (without the -c option) and redirect its output to a file. A script would then parse the file and send the data to Zabbix with zabbix_sender.
You wouldn't run this as a normal Zabbix user parameter, neither as an active nor as a passive check - it would block for too long. Instead, consider using atd (see this howto) or nohup to launch a script that sends the data to Zabbix with zabbix_sender.
Note that you must run nethogs as root - use sudo for that.
I'm not aware of any existing scripts for this, but the following might get you started:
nethogs -c 1 -d 1 -t | awk 'BEGIN {FS="[[:space:]/]+"}; /Refreshing/,0 \
{if ($1 != "Refreshing:" && $1 != "unknown") {print $(NF-4), $(NF-1), $NF}}'
Here, awk grabs only program lines and prints out program name and sent/received traffic.
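Building on that, here is a minimal sketch of feeding the parsed output straight into zabbix_sender (the hostname myhost, the item keys, and the Zabbix server name are assumptions; the items would have to exist as trapper items):
sudo nethogs -c 1 -d 60 -t -v 2 | awk 'BEGIN {FS="[[:space:]/]+"}; /Refreshing/,0 \
{if ($1 != "Refreshing:" && $1 != "unknown") \
{print "myhost net.proc.sent[" $(NF-4) "]", $(NF-1); \
print "myhost net.proc.received[" $(NF-4) "]", $NF}}' \
| zabbix_sender -z zabbix.example.com -i -
With -i -, zabbix_sender reads "hostname key value" lines from standard input.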

Protect/encrypt R package code for distribution [closed]

I am writing a package in R and would like to protect/encrypt the code. Basically, when looking into my package code, it should be encrypted and not readable. I have read that someone has encrypted his code (1), but I haven't found any more information about that. I know I could just write the code in C/C++ and compile it, but I would like to leave it in R and just "protect" it there.
My question is: Is this possible, how is this possible?
I appreciate your answer!
Reference:
(1) link
Did you try following that thread?
https://stat.ethz.ch/pipermail/r-help/2011-July/282717.html
At some point the R code has to be processed by the R interpreter. If you give someone encrypted code, you have to give them the decryption key so R can run it. Maybe you can hide the key somewhere and hope they don't find it. But they must have access to it in order to generate the plain text R code somehow.
This is true for all programs or files that you run or view on your computer. Encrypted PDF files? No, they are just obfuscated, and once you find the decryption keys you can decrypt them. Even code written in C or C++ distributed as a binary can be reverse engineered with enough time, tools, and clever enough hackers.
If you want it protected, keep it on your servers and only allow access via a network API.
I recently had to do something similar to this and it wasn't easy, but I managed to get it done. Obfuscating and/or encrypting scripts is possible. The question is, do you have the time to devote to it? You'll need to make sure that whichever obfuscation/encryption method you use is very difficult and time-consuming to crack, and that it doesn't slow down the execution time of the script.
If you wish to encrypt an R script quickly, you can do so using this site.
I tested the following R code using the aforementioned site, and it produced a very intimidating output which somehow worked:
#!/usr/bin/env Rscript
for (i in 1:100) {
  if (i %% 3 == 0) {
    if (i %% 5 == 0) print("fizzbuzz") else print("fizz")
  } else if (i %% 5 == 0) {
    print("buzz")
  } else {
    print(i)
  }
}
If you do have some time on your hands and wish to encrypt your script on your own, using your own improvised method, you'll want to use the openssl command. Why? Because it appears to be the one encryption tool that is available across most, if not all, Unix-like systems. I've verified it exists on Linux (Ubuntu, CentOS, Red Hat), macOS, and AIX.
The simplest way to use openssl to encrypt a file or script is:
1. cat <your-script> | openssl aes-128-cbc -a -salt -k "specify-a-password" > yourscript.enc
OR
2. openssl aes-128-cbc -a -salt -in <path-to-your-script> -k "yourpassword" > yourscript.enc
To decrypt a script using openssl (notice the '-d'):
1. cat yourscript.enc | openssl aes-128-cbc -a -d -salt -k "specify-a-password" > yourscript.dec
OR
2. openssl aes-128-cbc -a -d -salt -in <path-to-your-script> -k "yourpassword" > yourscript.dec
The trick here would be to automate supplying the password so your users need not enter one each time they run the script - or maybe prompting them is exactly what you want?
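One way to automate it is a small wrapper that decrypts on the fly and pipes the plain text straight into the R interpreter, so it never lands on disk. A minimal sketch, assuming the key arrives in an environment variable SCRIPT_KEY and that /dev/stdin is available (Linux/macOS):
#!/bin/sh
# Decrypt the script and run it without writing the plain text to disk.
# Caveat: anyone who can read SCRIPT_KEY can also decrypt the script.
openssl aes-128-cbc -a -d -salt -in yourscript.enc -k "$SCRIPT_KEY" | Rscript /dev/stdin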

What is the average capacity of a postfix mail server? [closed]

I appreciate there is no 'set' answer to this question. I am trying to assess the performance of our dedicated mail server for sending out emails. The server is of the spec below:
2G RAM
CPU Xeon 2.80GHz (x2)
Currently we're only managing to send out approximately 21,000 emails per hour from this, which I suspect is massively under-performing.
Are there any guidelines as to what capacity can be expected?
Actually, it also depends on the configuration. For example, if you use amavis, SpamAssassin, ClamAV, or another content filter, it will directly affect performance.
If you do not use any content filter, you should have a capacity limit higher than 21,000 emails/hour.
Another point is queue size. If you have a growing queue, you have a problem; if the queue is steady, there is no need to worry. Check the queue size with "mailq | tail -1".
Check some params:
default_destination_concurrency_limit = 40
initial_destination_concurrency = 5
lmtp_destination_concurrency_limit = $default_destination_concurrency_limit
local_destination_concurrency_limit = 10
relay_destination_concurrency_limit = $default_destination_concurrency_limit
smtp_destination_concurrency_limit = $default_destination_concurrency_limit
virtual_destination_concurrency_limit = 35
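You can inspect the live values with postconf, for example:
postconf default_destination_concurrency_limit smtp_destination_concurrency_limit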
Check master.cf
smtp inet n - n - 300 smtpd
smtp unix - - n - - smtp
smtpd is the incoming limit.
smtp is for outgoing. If the 7th field (maxproc) has a value, it limits the number of concurrent server processes.
You can check Google for further analysis:
http://www.google.com.tr/search?q=postfix+performance
Use current network bandwidth and CPU usage to determine the capacity. If you are using 25 percent of the bandwidth and CPU, then in theory there is roughly four times the headroom, so you should be able to get at least 42,000 emails per hour (I just doubled rather than quadrupled, to be on the safe side).
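A quick way to sample both, assuming the sysstat tools are installed:
sar -n DEV 60 5   # network throughput per interface, five 60-second samples
sar -u 60 5       # CPU utilisation over the same sampling window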
