Graph old data using graphite and statsd

Can I include a timestamp when sending data to graphite via statsd (javascript statsd)? I need to graph old data.

No, you can't do that with statsd; however, you can achieve the same thing by sending your data directly to carbon, which accepts timestamps.
Statsd just collects real-time data; on each configured flush period it sums or averages every metric received during that period and sends the result to the graphite carbon daemon with the current timestamp.
Sending data to the carbon daemon is very straightforward: open a socket to carbon's common (plaintext) port (there is another port if you want to use the pickle protocol), and write to that socket one metric per line with the following fields:
metric_name metric_value metric_timestamp
Carbon will store the value at that timestamp, and you can use any timestamp you want as long as it falls within the retention range configured for that metric's storage.
There are many examples around, such as sending the lines with netcat. There's also a Graphite client written in C.
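As a minimal illustration (assuming carbon-cache is listening on localhost and the default plaintext line-receiver port 2003), a Python sketch for backfilling a single timestamped point could look like this:
import socket
import time

CARBON_HOST = 'localhost'  # assumption: carbon-cache runs locally
CARBON_PORT = 2003         # default plaintext (line receiver) port

def send_metric(name, value, timestamp=None):
    """Write one 'metric_name metric_value metric_timestamp' line to carbon."""
    if timestamp is None:
        timestamp = int(time.time())
    line = '%s %s %d\n' % (name, value, timestamp)
    sock = socket.create_connection((CARBON_HOST, CARBON_PORT))
    try:
        sock.sendall(line.encode('ascii'))
    finally:
        sock.close()

# Backfill a value recorded one hour ago (hypothetical metric name)
send_metric('myapp.requests.count', 42, int(time.time()) - 3600)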

I wanted to use statsd but not in real time, because I process log files once an hour. So I modified the server code to accept a timestamp, and modified the client code to send one. It ended up working for me although it feels very "home grown" and I can't update to newer versions of statsd without extra work. The tricky part is that the server does some aggregation into 10-second buckets. In real time, this is pretty easy to do, but if you are going to accept a timestamp, you have to keep a lot more data around. For me, since my data can only be around an hour old, it wasn't too hard, but my solution doesn't really work for a general case.
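That modification isn't part of stock statsd, but as a rough sketch of the bucketing idea (hypothetical names, not the actual patch), keeping timestamped counter increments grouped into 10-second buckets might look like this:
from collections import defaultdict

BUCKET_SIZE = 10  # seconds, matching statsd's aggregation interval

# bucket start time -> metric name -> running sum
buckets = defaultdict(lambda: defaultdict(float))

def record(metric, value, timestamp):
    """Add a timestamped counter increment to its 10-second bucket."""
    bucket_start = int(timestamp) - (int(timestamp) % BUCKET_SIZE)
    buckets[bucket_start][metric] += value

def flush_older_than(cutoff):
    """Flush only buckets old enough that no more late data is expected."""
    for bucket_start in sorted(b for b in buckets if b < cutoff):
        for metric, total in buckets.pop(bucket_start).items():
            # here you would send 'metric total bucket_start' on to carbon
            print(metric, total, bucket_start)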

It looks like there is a way to send raw data via statsd, but it won't be aggregated:
def send(self, subname, value, timestamp=None):
    '''Send the data to statsd via self.connection

    :keyword subname: The subname to report the data to (appended to the
        client name)
    :keyword value: The raw value to send
    :keyword timestamp: Optional timestamp to attach to the value
    '''
    name = self._get_name(self.name, subname)
    # Fall back to the current time if no timestamp was supplied
    ts = timestamp if timestamp is not None else int(time.time())
    return statsd.Client._send(self, {name: '%s|r|%s' % (value, ts)})
see:
http://python-statsd.readthedocs.org/en/latest/_modules/statsd/raw.html
https://github.com/chuyskywalker/statsd/blob/master/README.md
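Assuming the package exposes that Raw client at the top level (as its docs suggest), usage might look roughly like this; treat the metric names as placeholders:
import time
import statsd  # python-statsd, not the statsd reference server

raw = statsd.Raw('myapp.requests')              # assumption: Raw is exported as statsd.Raw
raw.send('count', 42, int(time.time()) - 3600)  # send an hour-old data point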

Related

How to send multiple parts of a single piece of data, generated at different intervals, over TCP?

I have a problem where my data source generates parts of the data at delayed intervals, and I want to send these parts over TCP and make sure all the parts are received at once.
If you want all the data pieces received at once at the destination, then you need to send them that way: queue them locally and send when complete.
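A minimal Python sketch of that approach (the endpoint and the notion of a 'last' part are placeholders):
import socket

parts = []  # chunks produced at different times

def add_part(chunk, is_last=False):
    """Buffer chunks locally; open the connection and send only when complete."""
    parts.append(chunk)
    if is_last:
        payload = b''.join(parts)
        with socket.create_connection(('example.com', 9000)) as sock:  # placeholder endpoint
            sock.sendall(payload)
        parts.clear()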

MediaRecorder API chunks as independent videos

I'm trying to build a simple app that streams video from the camera, via the browser, to a remote server.
For camera access from the browser I've found a wonderful WebRTC API: getUserMedia.
Now, for streaming it to the server, IIUC the best way would be to use one of the WebRTC APIs for transport and then use some server-side library to handle it.
However, at first I went with a bit different approach:
I used MediaRecorder on the stream from the camera, setting the timeslice for MediaRecorder.start() to a few hundred ms, e.g. 200. I had some assumptions about MediaRecorder that turned out not to match what I observed:
If there was only 1 chunk uploaded to the server, it opens just fine.
If there are multiple chunks, none of them loads correctly; they give the error: Could not determine type of stream. But if I use ffmpeg to concat all the chunks, the resulting file is fine. The same happens if I concatenate the blobs from MediaRecorder.ondataavailable on the client.
Thus the question:
Can the chunks in theory be independent video files? Or is that not what MediaRecorder was designed for? If not, then why do we even have the option to pass a timeslice parameter to its start() method?
Bonus question
If we set the timeslice comparatively small, e.g. 10 ms, many of the data blobs sent to MediaRecorder.ondataavailable are of size 0. Where can we find some sort of guarantees/specs on the minimal timeslice we can use so that the data blobs are meaningful?
The documentation says the following:
If timeslice is not undefined, then once a minimum of timeslice milliseconds of data have been collected, or some minimum time slice imposed by the UA, whichever is greater, start gathering data into a new Blob blob, and queue a task, using the DOM manipulation task source, that fires a blob event named dataavailable at recorder with blob.
So my guess is that this is somehow related to some data blobs being of size 0. What does "some minimum time slice imposed by the UA" mean?
PS
Happy to provide code if needed, but the question is not about specific code; it is about understanding the assumptions behind the MediaRecorder API and why they are there.
The timeslice parameter does not let you create independent media chunks; instead, it gives you an opportunity to save data (e.g. to the filesystem, or upload it to a server) on a regular basis, rather than holding potentially large media content in memory.

How do you sum a statsd counter over a large time range with correct values?

Background
A basic use case for statsd & grafana is learning how many times a function has been called over a time range, whether that is "last 6h", "since the beginning of today", "since the beginning of time", etc.
What I'm struggling to find is the correct function to achieve this. I'm using a hosted solution; however, I can confirm that data is being flushed from StatsD to Graphite in 10s intervals.
Current Setup
StatsD Flush: 10s
Graph Function: hitcount(counters.login.employer.count, "10seconds")
Time Range: 24h
Problem
When using hitcount(counters.login.employer.count, "10seconds"), the data returned is incorrect. In fact, I can do 24h, 23h, 22h, and note the values are actually increasing.
I've performed all testing here in a controlled environment, only my machine is sending metrics to StatsD. This is not yet in production code.
Any idea what could be going on here?
The way counters work is that on each interval the value of the counter is sent to graphite and reset in statsd, so what you're looking for is the sum of the series.
You can do that using consolidateBy('sum') combined with maxDataPoints=1.
Be aware that if your series is being aggregated in graphite, you'll need to make sure the aggregation is by sum; otherwise, when the individual values reported by statsd get rolled up into aggregated buckets, they'll be averaged and your sum won't work across longer intervals. You can read more about configuring aggregation in the Graphite documentation.
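For illustration, a render-API request along these lines (the Graphite host is a placeholder) returns a single consolidated point whose value is the sum over the window:
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    'target': "consolidateBy(counters.login.employer.count, 'sum')",
    'from': '-24h',
    'maxDataPoints': 1,
    'format': 'json',
})
url = 'http://graphite.example.com/render?' + params  # placeholder host
with urllib.request.urlopen(url) as resp:
    series = json.load(resp)

# Each series has 'datapoints': [[value, timestamp], ...]; with maxDataPoints=1
# there is one consolidated point holding the sum for the whole range.
print(series[0]['datapoints'][0][0])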

Is it possible to use the TWS/IBpy interface to collect and analyze tick data?

While searching for a template to test a paper trading strategy, I stumbled on IBPy. I have gone through the initial set-up and can connect and receive updates from the server. What I would like to do is:
a) Gather ticks from 1..n symbols when new prices (bid/asks) are published
b) Store these temporarily in a vector (I guess with vector.append((bid, ask)))
c) Once the vector reaches its computational max (I need 30 seconds or a certain number of ticks), I will compute some values on vector[] and decide whether an entry is appropriate
d) If not, pop(0) and keep collecting
e) exit on a stoploss or trailing profit
My questions are:
i) I have read that updates arrive every 250 ms. That is fine for my analytics, but can the program/system keep up? Different symbols update at different times, so even though symbolA updates every 250 ms, with 10 symbols the updates may be very frequent.
ii) When I stop to make a calculation, haven't I lost updates?
If there is skeleton code for this, it would be great to mess around with it
Thanks for listening!
If you need to handle hundreds of stock symbols you should have multiple (at least 2) threads. One thread pulls the incoming data from the socket, sorts the messages by message type, and pushes the data onto queues. Other threads wait on their respective queues and process the incoming data.
The idea is that the dispatcher thread ensures that all incoming data gets pulled from the socket as fast as possible.
Generally your PC will be able to handle anything IB will be willing to send you. If your processing does not take too much time - no locks, calls to sleep(), file operations - you can do everything in a single thread.
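A minimal sketch of the two-thread layout described above (the message shape here is a stand-in for whatever IbPy callbacks/message types you actually subscribe to):
import queue
import threading

tick_queue = queue.Queue()

def dispatcher(incoming_messages):
    """Pull messages off the feed as fast as possible and route them by type."""
    for msg in incoming_messages:      # stand-in for the IbPy message loop
        if msg.get('type') == 'tick':  # hypothetical message shape
            tick_queue.put(msg)
        # other message types would go to their own queues

def tick_worker():
    """Consume ticks and run the analytics without blocking the dispatcher."""
    window = []
    while True:
        msg = tick_queue.get()
        window.append((msg['bid'], msg['ask']))
        if len(window) > 120:          # e.g. roughly 30s of 250ms updates
            window.pop(0)
        # compute signals on `window` here and decide on entries/exits

threading.Thread(target=tick_worker, daemon=True).start()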

Making Carbon in Graphite accept all data, no matter what

The Carbon listener in Graphite has been designed and tuned to make it somewhat predictable in its load on your server, to avoid flooding the server itself with IO wait or skyrocketing the system load overall. It will drop incoming data if necessary, putting server load as the priority. After all, for the typical data being stored, it's no big deal.
I appreciate all that. However, I am trying to prime a large backlog of data into graphite, from a different source, instead of pumping in live data as it happens. I have a reliable data source from a third party that comes to me in bulk, once/day.
So in this case, I don't want any data values dropped on the floor. I don't really care how long the data import takes. I just want to disable all the safety mechanisms, let carbon do its thing, and know ALL my data has made it in.
I'm searching the docs and finding all kinds of advice on tuning the parameters of carbon_cache in carbon.conf, but I can't find this. It is starting to sound more like art than science. Any help appreciated.
The first thing, of course, is to receive data through the TCP listener (line receiver) instead of UDP, to avoid losing incoming points.
There are several settings in graphite that throttle parts of the pipeline, though it is not always clear what graphite does when the thresholds are reached. You'll have to test and/or read the carbon code.
You'll probably want to tune:
MAX_UPDATES_PER_SECOND = 500 (max number of disk updates in a second)
MAX_CREATES_PER_MINUTE = 50 (max number of metric creation per minute)
For the cache, USE_FLOW_CONTROL = True and MAX_CACHE_SIZE = inf (inf is a good value so revert to this if you changed it)
If you use a relay and/or aggregator, MAX_QUEUE_SIZE = 10000 and USE_FLOW_CONTROL = True are important.
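Pulled together, the relevant carbon.conf entries would sit in the [cache] section and might look like this (the numbers are illustrative starting points, not recommendations):
[cache]
MAX_CACHE_SIZE = inf
USE_FLOW_CONTROL = True
MAX_UPDATES_PER_SECOND = 500
MAX_CREATES_PER_MINUTE = 50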
I set this property to "inf":
MAX_CREATES_PER_MINUTE = inf
and make sure that this is infinite too:
MAX_CACHE_SIZE = inf
During the bulk load, I monitor /opt/graphite/storage/log/carbon-cache/carbon-cache-a/creates.log to make sure that the whisper DBs are being created.
To make sure, you can run the load a second time and there should be no further creations.
