AWS DynamoDB metrics don't work correctly - amazon-dynamodb

I am using DynamoDB and am curious about this:
When I view the table metrics with a period of 10 or 30 seconds, consumption seems to exceed the provisioned value.
Then, when I view the table metrics with a period of 1 minute, it doesn't reach the provisioned value at all.
I want to know why this happens.

This isn't a DynamoDB thing. It's a CloudWatch rendering thing.
Consumed capacity has a natural period of 1 minute. When you set your graph period to 1 minute everything is correct. Consumed is below provisioned.
If you change the graph period to 30 seconds, your consumed view adjusts and you see consumption that's double what's real: the math behind the scenes divides the one-minute datapoint by the wrong period. With a graph period of 10 seconds, you get 6x reality; with a graph period of 5 minutes, 1/5th of reality.
The Provisioned line isn't based on an equation involving Period so it's not affected by the chosen period.
Maybe someone can comment on why the user is allowed to control the Period of the view but it just messes things up when it doesn't match the natural period of the data.
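The scaling described above can be sketched in a few lines. This is a hedged model of the rendering math, not CloudWatch's actual implementation: it assumes the graph takes a one-minute Sum datapoint and divides it by the chosen graph period instead of by the 60-second emission interval.

```python
# Sketch: why a metric emitted once per 60 s looks inflated or deflated
# when a graph divides it by a mismatched period. The 60 s emission
# interval is DynamoDB's; the division model is an assumption about
# how the console derives a per-second rate.

EMISSION_INTERVAL_S = 60  # ConsumedCapacity datapoints cover one minute

def apparent_rate(units_per_minute: float, graph_period_s: int) -> float:
    """Per-second rate shown when a one-minute Sum datapoint is
    divided by the graph period instead of by 60 s."""
    return units_per_minute / graph_period_s

true_rate = apparent_rate(120, EMISSION_INTERVAL_S)  # 2.0 units/s, correct
print(apparent_rate(120, 30) / true_rate)   # 2x inflated at a 30 s period
print(apparent_rate(120, 10) / true_rate)   # 6x inflated at a 10 s period
print(apparent_rate(120, 300) / true_rate)  # 1/5th at a 5-minute period
```

The Provisioned line stays put because it is a flat per-second value, not a sum divided by the period.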

Related

DynamoDB ConsumedWriteCapacityUnits vs Consumed CloudWatch Metrics for 1 second Period

I am confused by this live chart: ConsumedWriteCapacityUnits is exceeding the provisioned units, while "consumed" is way below. Do I have a real problem or not?
This seems to only show for the one-second period view, not the one-minute period view.
Your period is wrong for the metrics. DynamoDB emits metrics at the following periods:
ConsumedCapacity: 1 min
ProvisionedCapacity: 5 min
For ConsumedCapacity you should divide the metric by the period, but only use periods of at least 1 minute.
Exceeding provisioned capacity for short periods of time is fine, as burst capacity will allow you to do so. But if you exceed it for long periods it will lead to throttling.
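If you want a view that stays correct regardless of graph period, one option (an assumption about your console setup, not part of the original answer) is a CloudWatch metric math expression that divides the Sum statistic by the metric's own period, so the graph always shows average units per second, directly comparable to the per-second provisioned line:

```
# m1 = ConsumedWriteCapacityUnits, statistic: Sum, period: 60
e1 = m1 / PERIOD(m1)    # average consumed write units per second
```

PERIOD(m1) returns the datapoint period in seconds, so the division is always by the right interval no matter how the graph is zoomed.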

Graphite, datapoints disappear if I choose a wider time range

If I ask for this data:
https://graphite.it.daliaresearch.com/render?from=-2hours&until=now&target=my.key&format=json
I get, among other datapoints, this one:
[
2867588,
1398790800
]
If I ask for this data:
https://graphite.it.daliaresearch.com/render?from=-10hours&until=now&target=my.key&format=json
The datapoint looks like this:
[
null,
1398790800
]
Why is this datapoint being nullified when I choose a wider time range?
Update
I'm seeing that for a chosen date range smaller than 7 hours, the resolution of the datapoints is every 10 seconds, and when the chosen date range is 7 hours or bigger, the resolution goes to one datapoint every 1 minute. It continues in this direction as the chosen date range gets bigger, to one datapoint every 10 minutes, and so on.
So when the resolution of the datapoints is every 10 seconds the data is there; when the resolution is every 1 minute or more, the datapoint has no value :/
I'm sending a datapoint every 1 hour, so maybe it is a conflict between the resolution configuration and me sending only one datapoint per hour.
There are several things happening here, but basically the problem is that you have misconfigured graphite (or at least, configured it in a way that makes it do things that you aren't expecting!)
Specifically, you should set xFilesFactor = 0.0 in your storage-aggregation.conf file. Since you are new at this, you probably just want this (mine is in /opt/graphite/conf/storage-aggregation.conf):
[default]
pattern = .*
xFilesFactor = 0.0
aggregationMethod = average
The graphite docs describe xFilesFactor like this:
xFilesFactor should be a floating point number between 0 and 1, and specifies what fraction of the previous retention level’s slots must have non-null values in order to aggregate to a non-null value. The default is 0.5.
But wait! This won't change existing statistics! These aggregation settings are set once per metric at the time the metric is created. Since you are new at this, the easy way out is to just go to your whisper directory and delete the prior data and start over:
cd /opt/graphite/storage/whisper/my/
rm key.wsp
Your root whisper directory may be different depending on platform, etc. After removing the data files, graphite should recreate them automatically upon the next metric write, and they should get your updated settings (don't forget to restart carbon-cache after changing your storage-aggregation settings).
Alternatively, if you need to keep your old data you will need to run whisper-resize.py against your whisper (.wsp) data files with --xFilesFactor=0.0 and also likely all of your retention settings from storage-schemas.conf (also viewable with whisper-info.py)
Finally, I should add that the reason you get non-null data in your first query, but null data in your second, is because graphite will try to pick the best available retention period from which to serve your request based on the time window you requested.

For the smaller window, graphite is deciding that it can serve your request using the highest precision data (i.e., non-aggregated) and so you are seeing your raw metrics. For the longer time window, graphite is finding that the high-precision, non-aggregated data is not available for the entire window -- these periods are configured in storage-schemas.conf -- so it skips to the next highest-precision data set available (i.e. the first aggregation tier) and returns only aggregated data. Because your aggregation config is writing null data, you are therefore seeing null metrics! So fix the aggregation, and you should fix the null data problem.

But remember that graphite never combines aggregation tiers in a single request/response, so anytime you see differences between results from the same query when all you are changing is the from / to params, the problem is pretty much always due to aggregation configs.
I'm not quite sure about your specific situation, but I think I can give you some general pointers.
First off, you are right about the changing resolution depending on the time range. This is configured in storage-schemas.conf and is done to save space when storing data over large periods of time. An example could be: 15s:7d,1m:21d,15m:5y, meaning 15 seconds resolution for 7 days, then 1 minute resolution for 21 days, then 15min for 5 years.
Then there is the way Graphite does the actual aggregation from one resolution to the other. This is configured in: storage-aggregation.conf. The default settings are: xFilesFactor=0.5 and aggregationMethod=average. The xFilesFactor setting is saying that a minimum of 50% of the slots in the previous retention level must have values for next retention level to contain an aggregate. The aggregationMethod is saying that all the values of the slots in the previous retention level will be combined by averaging. My guess is that your stat doesn't have enough data points to fulfill the 50% requirement, resulting in a null value.
For more information, check out the docs, they are pretty complete: http://graphite.readthedocs.org/en/latest/config-carbon.html
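Given that the asker writes only one datapoint per hour, a schema whose highest-precision archive already matches that rate sidesteps the aggregation problem entirely. A sketch of the two config files (the section name and the my.key pattern are illustrative, not from the question):

```
# storage-schemas.conf -- store hourly data at hourly precision
[hourly_metrics]
pattern = ^my\.key$
retentions = 1h:30d,1d:5y

# storage-aggregation.conf -- keep aggregates even if most slots are null
[hourly_metrics]
pattern = ^my\.key$
xFilesFactor = 0.0
aggregationMethod = average
```

As noted above, these settings only take effect for metrics created after the change (and after a carbon-cache restart); existing .wsp files need to be deleted or resized with whisper-resize.py.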

Graphite: show change from previous value

I am sending Graphite the time spent in Garbage Collection (getting this from jvm via jmx). This is a counter that increases. Is their a way to have Graphite graph the change every minute so I can see a graph that shows time spent in GC by minute?
You should be able to turn the counter into a rate with the derivative function, then use the summarize function to roll that rate up into the time frame you're after.
&target=summarize(derivative(java.gc_time), "1min") # time spent per minute
derivative(seriesList)
This is the opposite of the integral function. This is useful for taking a
running total metric and showing how many requests per minute were handled.
&target=derivative(company.server.application01.ifconfig.TXPackets)
Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic.)
By applying the derivative function, you can get an idea of the packets per minute sent or received, even though you’re only recording the total.
summarize(seriesList, intervalString, func='sum', alignToFrom=False)
Summarize the data into interval buckets of a certain size.
By default, the contents of each interval bucket are summed together.
This is useful for counters where each increment represents a discrete event and
retrieving a “per X” value requires summing all the events in that interval.
Source: http://graphite.readthedocs.org/en/0.9.10/functions.html
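The derivative-then-summarize pipeline above can be sketched in plain Python. The function names mirror Graphite's but this is an illustrative model, not the Graphite implementation; the 10-second sample interval and GC numbers are assumptions:

```python
# Sketch of what derivative() and summarize() do to a cumulative counter.

def derivative(series):
    """Successive differences; the first slot has no predecessor."""
    return [None] + [b - a for a, b in zip(series, series[1:])]

def summarize(series, bucket_size, func=sum):
    """Collapse fixed-size buckets of non-null values with func."""
    out = []
    for i in range(0, len(series), bucket_size):
        vals = [v for v in series[i:i + bucket_size] if v is not None]
        out.append(func(vals) if vals else None)
    return out

# Cumulative GC time sampled every 10 s (ms spent in GC since JVM start).
gc_time = [0, 5, 5, 12, 20, 20, 21, 33, 33, 40, 41, 55]
per_sample = derivative(gc_time)       # ms of GC in each 10 s sample
per_minute = summarize(per_sample, 6)  # 6 samples = one 1-minute bucket
print(per_minute)  # → [20, 35]: GC ms spent in each minute
```

Note that the summed buckets account for the counter's total growth (20 + 35 = 55), which is exactly the property that makes derivative + summarize the right combination for an ever-increasing counter.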

How to intrepret values of the ASP.NET Requests/sec performance counter?

If I monitor ASP.NET Requests/sec performance counter every N seconds, how should I interpret the values? Is it number of requests processed during sample interval divided by N? or is it current requests/sec regardless the sample interval?
It depends on your N-seconds sampling interval. That is also why the performance counter will always read 0 on its first nextValue() call after being initialized: it makes all of its calculations relative to the last point at which you called nextValue().
You may also want to beware of calling your counter's nextValue() function at intervals of less than a second, as it can tend to produce some very outlandish outlier results. For my uses, calling the function every 5 seconds provides a good balance between up-to-date information and a smooth average.
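The delta-based calculation described above can be modeled in a few lines. This class is illustrative, not the .NET PerformanceCounter API: the point is that a rate counter stores the previous (count, timestamp) pair and returns (count delta) / (seconds elapsed), which is why the first reading is always 0:

```python
# Sketch of a rate-type counter like "Requests/sec": each reading is
# computed relative to the previous sample, not an absolute value.

class RateCounter:
    def __init__(self):
        self._last = None  # (total_count, timestamp) of previous sample

    def next_value(self, total_count, timestamp):
        if self._last is None:             # first call: no baseline yet
            self._last = (total_count, timestamp)
            return 0.0
        prev_count, prev_ts = self._last
        self._last = (total_count, timestamp)
        return (total_count - prev_count) / (timestamp - prev_ts)

c = RateCounter()
print(c.next_value(1000, 0.0))   # 0.0 -- no previous sample to diff against
print(c.next_value(1250, 5.0))   # 50.0 requests/sec over the 5 s window
```

Sampling every N seconds therefore gives you the average rate over that window, not an instantaneous value, which is also why very short windows amplify noise into outliers.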

1000 users a day for 10 minutes equal what concurrent load?

Is there a formula that will tell me the max/average # of concurrent users I would expect to have with a population of 1000 users a day using an app for 10 minutes?
1000 users X 10 minutes = 10,000 user minutes
10,000 user minutes / 1440 minutes in a day = 6.944 average # of concurrent users
If you are looking for better estimates of concurrent users, I would suggest putting google analytics on your site. It would give you an accurate reading of highs, lows, and averages for your site.
In the worst-case scenario, all 1000 users use the app at the same time, so the max # of concurrent users is 1000.
1000 users * 10 minutes = 10000 total minutes
One day has 24 hours * 60 minutes = 1440 minutes.
Assuming a uniform distribution, you would expect 10000 / 1440 = 6.9 users on average using your app concurrently. However, a uniform distribution is not a valid assumption, since you probably won't expect a lot of users in the middle of the night.
Well, assuming there was a steady arrival pattern, i.e. people visited the site with an equal distribution, you would have:
( Users * Visit Length in Minutes) / (Minutes in a Day)
or
(1000 * 10 ) / 1440
Which would be about 7 concurrent users
Of course, you would never have an equal distribution like that. You can take a stab at the anticipated pattern, and distribute the load based on that pattern. Best bet would be to monitor the traffic for a bit, with a decent sampling of user traffic.
That depends quite a bit on their usage pattern and no generic formula can cover it.
As an extreme example, if this is a timecard application, you would have a large peak at the start and stop of each work day, with scattered access between start and stop as people work on different projects, and almost no access outside of working hours.
Can you describe the usage pattern you expect?
Not accurately.
Will the usage be spread evenly over the day, or are there events that will cause everyone to use their 10 minutes at the same time?
For example, let's compare a general-purpose web site with a time card entry system.
A general-purpose web site will have ebbs and flows throughout the day, with clusters of time when you have lots of users (during work hours...).
The time card entry system might have all 1000 people hitting the system within a 15 minute span of time twice a day.
Simple math can show you an average, but people don't generally behave "on average"...
As is universally said, the answer is dependent upon the distribution of when people "arrive." If it's equally likely for someone to arrive at 3:23 AM as at 9:01 AM, max_concurrent is low; if everyone checks in between 8:55 AM and 9:30 AM, max_concurrent is high (and if response time slows depending on current load, so that the 'average' time on the site goes up significantly when there are lots of people on the site...).
A model is only as good as its inputs, but having said that, if you have a good sense of usage patterns, a Monte Carlo model might be a good idea. (If you have access to a person with a background in statistics or probability, they can do the math just based on the distribution parameters, but a Monte Carlo model is easier for most people to create and manipulate.)
In comments, you say that it's a "self-serve reference app similar to Wikipedia," but your relatively low usage means that you cannot rely on the power of large numbers to "dampen out" your arrival curves over 24-hours.
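The Monte Carlo approach suggested above can be sketched quickly. Everything about the arrival distribution here is an assumption invented for illustration (a two-peak workday pattern); only the 1000 users and 10-minute sessions come from the question:

```python
# Monte Carlo sketch: peak vs average concurrency for 1000 daily users
# with 10-minute sessions, under an assumed two-peak arrival pattern.
import random

random.seed(42)
MINUTES_IN_DAY = 24 * 60
SESSION_MIN = 10
N_USERS = 1000

def sample_arrival():
    # Assumed pattern: most users arrive near 9:00 or 14:00, a few anytime.
    r = random.random()
    if r < 0.45:
        return int(random.gauss(9 * 60, 45)) % MINUTES_IN_DAY
    if r < 0.90:
        return int(random.gauss(14 * 60, 45)) % MINUTES_IN_DAY
    return random.randrange(MINUTES_IN_DAY)

occupancy = [0] * MINUTES_IN_DAY
for _ in range(N_USERS):
    start = sample_arrival()
    for m in range(start, start + SESSION_MIN):
        occupancy[m % MINUTES_IN_DAY] += 1

avg = sum(occupancy) / MINUTES_IN_DAY
print(f"average concurrent: {avg:.1f}")        # ~6.9, matches the formula
print(f"peak concurrent:    {max(occupancy)}")  # far above the average
```

The average always comes out to (users x minutes) / 1440 regardless of the pattern; what the simulation adds is the peak, which is the number you actually need for capacity planning.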