I have used the OpenTSDB HTTP API like below:
http://192.168.1.249:4242/api/query?start=10m-ago&m=sum:rate:swagent.network.send&downsample=interval:10m-avg
It is not working for me.
How do I use downsampling to reduce the size of the OpenTSDB response?
Try specifying the downsampling function after the aggregator, like this:
m=sum:5d-avg:cpu.load.serv1{tag=value}
where 5d-avg is the downsampling function (a 5-day average).
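Applied to the query in the question, the m parameter would look something like the line below. Note that, as far as I know, downsampling goes inside the m parameter; there is no separate downsample URI parameter, and the exact ordering of the rate and downsampler tokens can depend on your OpenTSDB version, so check the /api/query docs for the release you run:
http://192.168.1.249:4242/api/query?start=10m-ago&m=sum:10m-avg:rate:swagent.network.send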
I'm having a problem using Grafana to query the number of requests coming in to my service.
Using prometheus-net on my .NET Core service, I have the "http_requests_received_total" counter metric.
I send 100 requests from Postman. Ideally, what I'd like to see is that at 12:20, 100 requests came in (which is visible from seeing the counter go from 0 to 100).
However, when using rate() or increase(), or sum() over rate()/increase(), I keep getting approximate results, never an exact 100 requests.
Can anyone point me in a direction on how I can achieve this, or where to read up on it?
Thanks!
Prometheus may return fractional results from the increase() function because of extrapolation; see this issue for details. If you need exact integer results from increase(), then try VictoriaMetrics, a Prometheus-like monitoring solution I work on. It returns the expected integer results from increase().
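For illustration, here are two PromQL variants using the metric name from the question. The offset form is a common workaround I've seen suggested: it returns exact differences of raw samples, but it assumes the counter did not reset (i.e. the service did not restart) inside the window:

    # Extrapolated over the 5m window; may return non-integer values:
    increase(http_requests_received_total[5m])

    # Raw difference between the sample now and the sample 5m ago;
    # exact integers, but wrong across counter resets:
    http_requests_received_total - http_requests_received_total offset 5m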
I am using Prometheus to instrument my Scala code. It works fine with Counters for most of the app-related metrics.
When it comes to measuring latency, I am not sure how to use Summaries or Histograms (or some other metric type) to measure the latency of asynchronous calls.
Calling Timer.observeDuration in a callback does not really do the trick, since the Timer is reset multiple times before one async call completes.
What approach should I take to measure asynchronous latency using prometheus metrics?
You need to pass the timer object around from where you create it to where the call finally completes, and only call observeDuration there.
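For example, here is a minimal Scala sketch of that pattern using the Histogram type from the io.prometheus.client (simpleclient) Java library; the timedCall helper, object and metric names are hypothetical choices of mine:

    import io.prometheus.client.Histogram
    import scala.concurrent.{ExecutionContext, Future}

    object LatencyMetrics {
      // One histogram for the whole app; a fresh Timer is started per call.
      val requestLatency: Histogram = Histogram.build()
        .name("async_request_latency_seconds")
        .help("Latency of asynchronous calls.")
        .register()

      // Starts a timer when the call is issued and observes the duration
      // exactly once, when that particular Future completes.
      def timedCall[T](call: => Future[T])(implicit ec: ExecutionContext): Future[T] = {
        val timer = requestLatency.startTimer() // captured by this call's closure only
        val future = call
        future.onComplete(_ => timer.observeDuration())
        future
      }
    }

Because every invocation gets its own Timer instance, concurrent calls no longer reset each other.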
We are trying to stream data from a car's OBD-II protocol via Wireshark. It's working fine and we get the IDs and data payloads out, interpreted as CAN. However, we would like to take it a step further and "scale" the data according to the documentation on Wikipedia.
This requires a formula that depends on information contained in the ID and in the start of the actual data message.
Could anyone provide some guidance on how we can create such a scaling/conversion of the data into readable output using Wireshark? Ideally, we would also like to inform the viewer of what data, units, etc. they're looking at; we have all this info, but we need a way to return it depending on the ID.
Hope you can help - it would be much appreciated! Martin
I recommend obtaining SAE J1979 and SAE J1979DA. In there you will find the complete information necessary to dissect the protocol, including units and scaling/offset for every standardized PID. Then, codify this into a protocol dissector in Wireshark.
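To give a concrete example of the kind of scaling involved (taken from the OBD-II PIDs page on Wikipedia that the question refers to): the engine-speed PID 0x0C returns two data bytes A and B, and the physical value is rpm = (256 * A + B) / 4. So A = 0x1A, B = 0xF8 decodes to (256 * 26 + 248) / 4 = 1726 rpm. A dissector would use the PID from the message to look up the right formula and unit and apply them to the payload bytes.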
I have a requirement where I need to send HTTP requests to a large number of small files (probably many hundreds of thousands), and I am trying to find an efficient way to create a large number of HTTP Samplers under a Thread Group.
Is there a way to automate this so that I can create a request of the form
http:///folder[index]/file[index]
where the index can vary from 0..500000?
I would like to generate the traffic with GETs on these requests.
I believe JMeter Functions can help you implement your scenario.
If the index can be a random value in the range from 0 to 500000, amend your request as follows to use the __Random function:
http://folder${__Random(0,500000,)}/file${__Random(0,500000,)}
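Note that two separate __Random calls will produce different values for the folder and the file. If both parts need the same index, the __Random function's optional third argument stores the generated value in a variable that you can reuse (idx here is an arbitrary variable name of my choosing):
http://folder${__Random(0,500000,idx)}/file${idx}
The __counter function likewise accepts a variable name as its second argument.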
If you want the index to be consecutive, i.e.
1st request - index=1
2nd request - index=2
etc.
Then the __counter function is your friend, and the path should be something like:
http://folder${__counter(,)}/file${__counter(,)}
See the How to Use JMeter Functions post series for more details on the most popular JMeter functions.
I have an application in which I will receive a big array of encoded bytes. I have to decode them and render the result. For decoding, I am using a custom decoder class. After decoding, how can I construct a DirectShow graph that will receive input data from the decoder? Please give some direction/samples on this.
Have a look at the PushSource sample in the DirectShow SDK. This sample shows you how to create a source filter that can be rendered. It is all about setting the output media type of your filter correctly so that the rest of the graph can be rendered. The sample also shows you how to feed media samples to the rest of the media pipeline. In your case, what do you decode to? The PushSource sample outputs RGB24, IIRC.
Also, it sounds like you're decoding in the same filter as you're receiving the bytes in? Typically in DirectShow you would write a source filter that receives bytes from the network and outputs samples in the encoded format. You would then connect this filter to a custom decoder filter that outputs either RGB24 or some other raw media format understood by DirectShow. Similarly for audio, you could output, say, PCM.
Edit:
I have used the same approach (CSource, CSourceStream). That is correct: DoBufferProcessingLoop calls FillBuffer. My general approach has been to use the producer-consumer pattern. The network-reading thread populates the queue with samples, and in my overridden DoBufferProcessingLoop I check whether the queue has any data, calling FillBuffer if there is. You can of course try other methods, such as waiting on events (frame availability). To see the approach I used, you can download the source code of an example RTSP source filter at http://sourceforge.net/projects/videoprocessing/ and see if it suits you. The best advice I can give is to just try stuff and learn as you go along.
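To make the producer-consumer idea concrete, here is a minimal C++ sketch of my own (not code from the filter linked above): the network/decoder thread pushes decoded frames into a blocking queue, and the pin's FillBuffer pops one frame and copies it into the media sample. Frame, FrameQueue and CMyPushPin are hypothetical names; timestamps, stop conditions and the rest of the CSourceStream boilerplate from the PushSource sample are omitted.

    #include <streams.h>              // DirectShow base classes
    #include <cstring>                // memcpy
    #include <deque>
    #include <mutex>
    #include <condition_variable>
    #include <vector>

    struct Frame { std::vector<BYTE> data; };

    struct FrameQueue {               // minimal blocking producer-consumer queue
        std::deque<Frame> q;
        std::mutex m;
        std::condition_variable cv;
        void push(Frame f) {          // called by the network/decoder thread
            { std::lock_guard<std::mutex> lock(m); q.push_back(std::move(f)); }
            cv.notify_one();
        }
        Frame pop() {                 // blocks until a frame is available
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });
            Frame f = std::move(q.front());
            q.pop_front();
            return f;
        }
    };

    FrameQueue g_frameQueue;          // filled elsewhere by the producer thread

    class CMyPushPin : public CSourceStream {
        // constructor and remaining overrides omitted; see the PushSource sample
        HRESULT FillBuffer(IMediaSample *pSample) override;
    };

    HRESULT CMyPushPin::FillBuffer(IMediaSample *pSample)
    {
        Frame frame = g_frameQueue.pop();         // wait for the producer

        BYTE *pBuffer = nullptr;
        HRESULT hr = pSample->GetPointer(&pBuffer);
        if (FAILED(hr)) return hr;

        long cb = (long)frame.data.size();        // clamp to the sample buffer
        if (cb > pSample->GetSize()) cb = pSample->GetSize();
        memcpy(pBuffer, frame.data.data(), cb);

        pSample->SetActualDataLength(cb);
        pSample->SetSyncPoint(TRUE);              // real code also sets timestamps
        return S_OK;
    }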