Network graphs are flat in Grafana using Telegraf (with InfluxDB)
I have a Grafana dashboard with metrics collected in InfluxDB by Telegraf. The problem is that the network graphs are flat.
My telegraf.conf includes the net plugin:
[[inputs.net]]
And a test returns data:
$ telegraf -config /etc/telegraf/telegraf.conf -input-filter net -test
* Plugin: net, Collection 1
> net,interface=eth0 bytes_recv=48497859793i,bytes_sent=68085171005i,drop_in=0i,drop_out=0i,err_in=0i,err_out=0i,packets_recv=65927848i,packets_sent=69072905i 1453196173154147048
> net icmp_inaddrmaskreps=0i,icmp_inaddrmasks=0i,icmp_incsumerrors=65i,icmp_indestunreachs=264807i,icmp_inechoreps=38i,icmp_inechos=1077178i,icmp_inerrors=4559i,icmp_inmsgs=1342870i,icmp_inparmprobs=0i,icmp_inredirects=6i,icmp_insrcquenchs=2i,icmp_intimeexcds=774i,icmp_intimestampreps=0i,icmp_intimestamps=0i,icmp_outaddrmaskreps=0i,icmp_outaddrmasks=0i,icmp_outdestunreachs=849867i,icmp_outechoreps=1077178i,icmp_outechos=3i,icmp_outerrors=0i,icmp_outmsgs=1928597i,icmp_outparmprobs=0i,icmp_outredirects=0i,icmp_outsrcquenchs=0i,icmp_outtimeexcds=1549i,icmp_outtimestampreps=0i,icmp_outtimestamps=0i,icmpmsg_intype0=38i,icmpmsg_intype11=774i,icmpmsg_intype3=264807i,icmpmsg_intype4=2i,icmpmsg_intype5=6i,icmpmsg_intype8=1077178i,icmpmsg_outtype0=1077178i,icmpmsg_outtype11=1549i,icmpmsg_outtype3=849867i,icmpmsg_outtype8=3i,ip_defaultttl=64i,ip_forwarding=2i,ip_forwdatagrams=0i,ip_fragcreates=17072i,ip_fragfails=0i,ip_fragoks=8536i,ip_inaddrerrors=0i,ip_indelivers=77465764i,ip_indiscards=0i,ip_inhdrerrors=0i,ip_inreceives=79567433i,ip_inunknownprotos=0i,ip_outdiscards=108775i,ip_outnoroutes=27i,ip_outrequests=70951694i,ip_reasmfails=52285i,ip_reasmoks=1327353i,ip_reasmreqds=2706991i,ip_reasmtimeout=44473i,tcp_activeopens=872419i,tcp_attemptfails=126726i,tcp_currestab=23i,tcp_estabresets=78613i,tcp_incsumerrors=0i,tcp_inerrs=90i,tcp_insegs=43809023i,tcp_maxconn=-1i,tcp_outrsts=113744i,tcp_outsegs=56961459i,tcp_passiveopens=1065318i,tcp_retranssegs=354967i,tcp_rtoalgorithm=1i,tcp_rtomax=120000i,tcp_rtomin=200i,udp_ignoredmulti=0i,udp_incsumerrors=0i,udp_indatagrams=33110797i,udp_inerrors=36303i,udp_noports=232164i,udp_outdatagrams=27459622i,udp_rcvbuferrors=36303i,udp_sndbuferrors=0i,udplite_ignoredmulti=0i,udplite_incsumerrors=0i,udplite_indatagrams=0i,udplite_inerrors=0i,udplite_noports=0i,udplite_outdatagrams=0i,udplite_rcvbuferrors=0i,udplite_sndbuferrors=0i 1453196173155777308
Am I missing something?
Thanks
You're graphing the total number of bytes/packets sent/received since the interface came up. This number will only ever increase. If you want to see these as a rate (e.g. bytes per second), you will need to use InfluxDB's DERIVATIVE() function:
SELECT derivative("bytes_recv", 1s) FROM "net" WHERE "host" = '1.2.3.4' AND "interface" = 'eth0'
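To see why the raw series only climbs, here is a small Python sketch (with hypothetical sample values) of what DERIVATIVE(1s) computes: the difference between consecutive cumulative counter samples, divided by the elapsed seconds.

```python
# Cumulative bytes_recv samples taken every 10 seconds (hypothetical values).
samples = [48_497_859_793, 48_497_982_113, 48_498_101_433]
interval_s = 10

# DERIVATIVE(field, 1s) ~ (current - previous) / elapsed_seconds
rates = [
    (curr - prev) / interval_s
    for prev, curr in zip(samples, samples[1:])
]
print(rates)  # -> [12232.0, 11932.0] bytes per second
```

Graphing `rates` instead of `samples` gives the familiar sawtooth traffic curve rather than a monotonically increasing (visually flat) line.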
Related
How to change the interval of a plugin in telegraf?
Using: telegraf version 1.23.1

The workflow is Telegraf => InfluxDB => Grafana. I am using Telegraf to collect metrics on a shared server. So far so good: I was already able to initialize the Telegraf uWSGI plugin and display the data of my running Django projects in Grafana.

Problem

Now I wanted to check some folder sizes too, with the [[inputs.filecount]] Telegraf plugin, and this also works well. However, I do not need metrics every 10s for this plugin, so I changed the interval in the [[inputs.filecount]] plugin as described in the documentation.

telegraf.conf

[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "5s"
  flush_interval = "10s"
  flush_jitter = "0s"
  #...

PLUGIN

[[inputs.filecount]]
  # set different interval for this input plugin every 10min
  interval=“600s”
  collection_jitter=“20s”

  # Default from Doc =>
  directories = ["/home/myserver/logs", "/home/someName/growingData, ]
  name = "*"
  recursive = true
  regular_only = false
  follow_symlinks = false
  size = "0B"
  mtime = "0s"

After restarting Telegraf with Supervisor, it crashed because it could not parse the new lines.

supervisor.log

Error running agent: Error loading config file /home/user/etc/telegraf/telegraf.conf: Error parsing data: line 208: invalid TOML syntax

These are the lines I added, because I thought that is how the documentation describes it.

telegraf.conf

# set different interval for this input plugin every 10min
interval=“600s”
collection_jitter=“20s”

Question

So my question is: how can I change or set up the interval for a single input plugin in Telegraf? Or do I have to use a different TOML syntax, like [[inputs.filecount.agent]]? I assume that I do not have to change any output interval as well? Even though it is currently 10s, if this input plugin only collects data every 600s it should not matter; some flush cycle will push the data to InfluxDB.
How can i change or setup the interval for a single input plugin in telegraf?

As the link you pointed to shows, individual inputs can set the interval and collection_jitter options. There is no difference in the TOML syntax. For example, I can do the following for the memory input plugin:

[[inputs.mem]]
  interval="600s"
  collection_jitter="20s"

I assume that i do not have to change any output interval also?

Correct, these are independent of each other.

line 208: invalid TOML syntax

Knowing what exactly is on line 208 and around that line will hopefully resolve your issue and get you going again. Also make sure the quotes you used are correct. Sometimes when people copy and paste quotes they get ” instead of ", which can cause issues!
How to simulate network issues such as packet loss, delay when streaming audio?
I have audio files and I want to apply some network issues to them, such as packet loss, jitter, and delay. I need an emulator to apply these network conditions to my audio file. What is the best and most appropriate emulator for my work, and can I install it on Windows?
Assuming you are looking for how to achieve this, as the title suggests, and do not want to start an emulator/simulator flame war here, I suggest you take a look at mininet. It has a command line interface for simple tasks and a Python API that will let you customise most aspects of the network (such as bandwidth capacity, latency, loss rate, etc.). All you need to do is set up two (or more) hosts, connect them with a link, and configure the link properties. This is an easy way to get varying latency as you requested. Additionally, you can set a link-loss percentage from 0-100%. But if you are looking into dropping a specific packet, you will either have to do that at the hosts themselves with help of the transport protocol, or create a custom controller (switch implementation).

Here is a snippet using the Python API that might get you started (note that TCLink takes the delay as a string, e.g. delay='20ms'):

from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import CPULimitedHost
from mininet.link import TCLink
from mininet.util import irange

class MySingleSwitchTopo( Topo ):
    "Single switch connected to k hosts."
    def build( self, k=2, **_opts ):
        "k: number of hosts"
        self.k = k
        switch = self.addSwitch( 's1' )
        for h in irange( 1, k ):
            host = self.addHost( 'h%s' % h )
            # create a link with 20ms latency, 0.1% loss chance
            # and 5 Mbit bandwidth capacity
            self.addLink( host, switch, loss=0.1, delay='20ms', bw=5 )

if __name__ == '__main__':
    topo = MySingleSwitchTopo()
    net = Mininet(topo=topo, host=CPULimitedHost, link=TCLink)
    net.start()
    h1, h2 = net.get('h1', 'h2')
    h1.cmd('iperf -s &')                  # Replace with your audio server app
    out = h2.cmd('iperf -c ' + h1.IP())   # Replace with your audio client app
    print(out)                            # output from the client app
    net.stop()
How to know the network traffic my test (using JMeter) is going to generate?
I am going to run a load test using JMeter on Amazon AWS, and I need to know before starting my test how much traffic it is going to generate over the network. The criterion in Amazon's policy is: sustains, in aggregate, for more than 1 minute, over 1 Gbps (1 billion bits per second) or 1 Gpps (1 billion packets per second). If my test is going to exceed these limits, we need to submit a form before starting the test. So how can I know whether the test will exceed these numbers or not?
Run your test with 1 virtual user and 1 iteration in command-line non-GUI mode, like:

jmeter -n -t test.jmx -l result.csv

To get an approximate figure, open the result.csv file using an Aggregate Report listener; there you will have 2 columns: Received KB/sec and Sent KB/sec. Multiply them by the duration of your test in seconds and you will get the number you're looking for.

Alternatively, you can open the result.csv file using MS Excel, LibreOffice Calc, or equivalent, where you can sum the bytes and sentBytes columns and get the traffic with 1-byte precision.
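The spreadsheet step can also be scripted. A minimal sketch using Python's csv module, assuming a result.csv with the default JMeter bytes and sentBytes columns (the sample rows below are hypothetical):

```python
import csv
import io

# Hypothetical excerpt of a JMeter result.csv (only relevant columns shown).
result_csv = io.StringIO(
    "timeStamp,label,bytes,sentBytes\n"
    "1660000000,HTTP Request,5120,340\n"
    "1660000001,HTTP Request,7200,355\n"
)

received = sent = 0
for row in csv.DictReader(result_csv):
    received += int(row["bytes"])      # response traffic
    sent += int(row["sentBytes"])      # request traffic

print(f"received={received} bytes, sent={sent} bytes")
```

Multiply these single-user, single-iteration totals by your planned number of users and iterations (and divide by the test duration) to estimate whether the full test approaches the 1 Gbps threshold.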
Is there a restriction on the number of HERE API calls I can make in a loop (using R)?
I am trying to loop through a list of origin-destination lat/long locations to get the transit time. I get the following error when I loop; however, when I do a single call (without looping), I get an output without error. I use the freemium HERE API and I am allowed 250k transactions a month.

for (i in 1:nrow(test)) {
  call <- paste0("https://route.api.here.com/routing/7.2/calculateroute.json",
                 "?app_id=", "appid",
                 "&app_code=", "appcode",
                 "&waypoint0=geo!", y$dc_lat[i], ",", y$dc_long[i],
                 "&waypoint1=geo!", y$store_lat[i], ",", y$store_long[i],
                 "&mode=", "fastest;truck;traffic:enabled",
                 "&trailerscount=", "1",
                 "&routeattributes=", "sh",
                 "&maneuverattributes=", "di,sh",
                 "&limitedweight=", "20")
  response <- fromJSON(call, simplify = TRUE)
  Traffic_time = (response[["response"]][["route"]][[1]][["summary"]][["trafficTime"]]) / 60
  Base_time = (response[["response"]][["route"]][[1]][["summary"]][["baseTime"]]) / 60
  print(Traffic_time)
}

Error in file(con, "r"): cannot open the connection to 'https://route.api.here.com/routing/7.2/calculateroute.json?app_id=appid&app_code=appcode&waypoint0=geo!45.1005200,-93.2452000&waypoint1=geo!45.0978500,-95.0413620&mode=fastest;truck;traffic:enabled&trailerscount=1&routeattributes=sh&maneuverattributes=di,sh&limitedweight=20'
Traceback:
As per the error, this suggests that there is a problem opening the connection at your end; one of the requests in the loop is failing rather than the API rejecting you outright. It is worth retrying the failing call, and restarting your IDE can also help. The number of API calls you can make depends on the plan you have opted for (freemium or pro). You can find more details here: https://developer.here.com/faqs
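A transient "cannot open the connection" inside a tight loop is usually handled with a retry-and-backoff wrapper around each request. Here is a language-agnostic sketch in Python (the same pattern applies to the R loop via tryCatch() and Sys.sleep()); flaky_call below is a hypothetical stand-in for the real HTTP request:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(); on ConnectionError, wait with exponential backoff and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries, propagate the error
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# Hypothetical stand-in for the real API request: fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("cannot open the connection")
    return {"trafficTime": 3600}

result = with_retries(flaky_call)
print(result["trafficTime"] / 60)  # -> 60.0 (minutes)
```

Adding a small fixed delay between loop iterations also keeps the request rate polite, which matters when you are making thousands of routing calls against a shared endpoint.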
Issue while ingesting a Titan graph into Faunus
I have installed both Titan and Faunus and each seems to be working properly (titan-0.4.4 & faunus-0.4.4). However, after ingesting a sizable graph in Titan and trying to import it into Faunus via FaunusFactory.open(), I am experiencing issues. To be more precise, I do seem to get a Faunus graph from the call FaunusFactory.open():

faunusgraph[titanhbaseinputformat->titanhbaseoutputformat]

but then, even asking for a simple g.v(10), I get this error:

Task Id : attempt_201407181049_0009_m_000000_0, Status : FAILED
com.thinkaurelius.titan.core.TitanException: Exception in Titan
    at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.getAdminInterface(HBaseStoreManager.java:380)
    at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.ensureColumnFamilyExists(HBaseStoreManager.java:275)
    at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.openDatabase(HBaseStoreManager.java:228)

My property file is taken straight from the Faunus page for Titan-HBase input, except of course for changing the URL of the Hadoop cluster:

faunus.graph.input.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseInputFormat
faunus.graph.input.titan.storage.backend=hbase
faunus.graph.input.titan.storage.hostname= my IP
faunus.graph.input.titan.storage.port=2181
faunus.graph.input.titan.storage.tablename=titan
faunus.graph.output.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseOutputFormat
faunus.graph.output.titan.storage.backend=hbase
faunus.graph.output.titan.storage.hostname= IP of my host
faunus.graph.output.titan.storage.port=2181
faunus.graph.output.titan.storage.tablename=titan
faunus.graph.output.titan.storage.batch-loading=true
faunus.output.location=output1
zookeeper.znode.parent=/hbase-unsecure
titan.graph.output.ids.block-size=100000

Can anyone help?

ADDENDUM: To address the comment below, here is some context. As I have mentioned, I have a graph in Titan and can perform basic Gremlin queries on it. However, I need to run a global Gremlin query which, due to the size of the graph, requires Faunus and its underlying MapReduce capabilities; hence the need to import it. The error I get doesn't look to me as if it points to some inconsistency in the graph itself.
I'm not sure that you have your "flow" of Faunus right. If your end goal is to run a global query of the graph, then consider this approach:

1. pull your graph to a sequence file
2. issue your global query over the sequence file

More specifically, create hbase-seq.properties:

# input graph parameters
faunus.graph.input.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseInputFormat
faunus.graph.input.titan.storage.backend=hbase
faunus.graph.input.titan.storage.hostname=localhost
faunus.graph.input.titan.storage.port=2181
faunus.graph.input.titan.storage.tablename=titan
# hbase.mapreduce.scan.cachedrows=1000

# output data (graph or statistic) parameters
faunus.graph.output.format=org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=snapshot
faunus.output.location.overwrite=true

In Faunus, do:

g = FaunusFactory.open('hbase-seq.properties')
g._()

That will read the graph from HBase and write it to a sequence file in HDFS. Next, create seq-noop.properties with these contents:

# input graph parameters
faunus.graph.input.format=org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
faunus.input.location=snapshot/job-0

# output data parameters
faunus.graph.output.format=com.thinkaurelius.faunus.formats.noop.NoOpOutputFormat
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=analysis
faunus.output.location.overwrite=true

The above configuration will read your sequence file from the previous step without re-writing the graph (that's what NoOpOutputFormat is for). Now in Faunus do:

g = FaunusFactory.open('seq-noop.properties')
g.V.sideEffect('{it.degree=it.bothE.count()}').degree.groupCount()

This will execute a degree distribution, writing the results in HDFS to the 'analysis' directory.
Obviously you can do whatever Faunus-flavored Gremlin you want here - I just wanted to provide an example. I think this is a pretty standard "flow" or pattern for using Faunus from a graph analysis perspective.