workfront-api - Custom View

I'm setting up a dashboard in Workfront. I want to create a custom view that I'm calling "Est Variance" which, at the task level, will compare a task's planned hours to complete (workRequired) with actual hours to complete (actualWorkRequired). In other words, we planned for 10 hours but it took 15, so the value displayed should be +50%.
The calculation is Actual Hours (minus) Planned Hours (divided by) Planned Hours. I came up with the following code for the view:
displayname=Est Variance
linkedname=direct
namekey=Est Variance
querysort=actualWork
shortview=true
textmode=true
valueexpression=ROUND(SUB({actualWorkRequired},{workRequired}))/({workRequired})*100
valuefield=actualWorkRequired
valueformat=compound
viewalias=actualworkrequired
... which returns the correct value, but I'm trying to make the following changes:
CONCAT a "%" after the value
Round to the nearest whole number
Add rules that would display any positive value in red, and any negative value in green.
For tasks returning "0" (planned hours = actual hours), display nothing.

1.) CONCAT a "%" after the value
2.) Round to the nearest whole number
Setting valueformat=doubleAsPercentRounded will accomplish both, so you can simplify the valueexpression to:
valueexpression=SUB({actualWorkRequired},{workRequired})/{workRequired}
3.) Add rules that would display any positive value in red, and any negative value in green.
You can use conditional formatting to color the results depending on their value, e.g.:
styledef.case.0.comparison.icon=false // show the value instead of the icon
styledef.case.0.comparison.leftmethod=Est Variance // column name
styledef.case.0.comparison.lefttext=Est Variance // column name
styledef.case.0.comparison.operator=lt // less than operator
styledef.case.0.comparison.operatortype=double // data type
styledef.case.0.comparison.righttext=0 // target value
styledef.case.0.comparison.trueproperty.0.name=textcolor // transform applied on true
styledef.case.0.comparison.trueproperty.0.value=03a219 // green
styledef.case.0.comparison.truetext= // ignore
4.) For tasks returning "0" (planned hours = actual hours), display nothing.
Finally, a simple IF statement in the valueexpression can make the value an empty string when the result is 0:
IF(condition, trueStatement, falseStatement)
valueexpression=IF({actualWorkRequired} = {workRequired}, "", SUB({actualWorkRequired},{workRequired})/{workRequired})
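Putting the pieces together, the whole view would look roughly like the snippet below. The case.1 block is an assumed mirror of the green rule from step 3, using the gt operator and a red hex value of ff0000 (not taken from the question), so adjust it to your palette:
displayname=Est Variance
linkedname=direct
namekey=Est Variance
querysort=actualWork
shortview=true
textmode=true
valuefield=actualWorkRequired
valueformat=doubleAsPercentRounded
valueexpression=IF({actualWorkRequired} = {workRequired}, "", SUB({actualWorkRequired},{workRequired})/{workRequired})
viewalias=actualworkrequired
styledef.case.0.comparison.icon=false
styledef.case.0.comparison.leftmethod=Est Variance
styledef.case.0.comparison.lefttext=Est Variance
styledef.case.0.comparison.operator=lt
styledef.case.0.comparison.operatortype=double
styledef.case.0.comparison.righttext=0
styledef.case.0.comparison.trueproperty.0.name=textcolor
styledef.case.0.comparison.trueproperty.0.value=03a219
styledef.case.0.comparison.truetext=
styledef.case.1.comparison.icon=false
styledef.case.1.comparison.leftmethod=Est Variance
styledef.case.1.comparison.lefttext=Est Variance
styledef.case.1.comparison.operator=gt
styledef.case.1.comparison.operatortype=double
styledef.case.1.comparison.righttext=0
styledef.case.1.comparison.trueproperty.0.name=textcolor
styledef.case.1.comparison.trueproperty.0.value=ff0000
styledef.case.1.comparison.truetext=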
Good luck!

Related

Remove total value for one column in PowerBI

I have a table visualisation in PowerBI that sums the top 10 products sold by sales quantity. I have a calculated column which shows the rate of sale, using other fields from the data source:
(quantity / # stores with product) / weeks on sale
The ROS calculates correctly, but it still sums and appears in the total row. The number of stores and number of weeks are set to 'Don't Summarize', but they still add together and give some meaningless number in the total row. If I set ROS to 'Don't Summarize' to remove its total, the summing of the rest of the table, and therefore the top N by quantity filter I have, drops out.
It is very frustrating! Is there an option somewhere to simply not display the total for a field? I don't want to remove the total row completely, as the other fields (e.g. Qty, Value, Margin) are useful to see as a sum. It seems very strange that it is so difficult to do something so minor.
Additional info:
Qty is a SUM field.
Stores is not summarized; it refers to the average number of stores that stock the product over the weeks of the trading season.
Weeks is not summarized; it refers to the weeks that have passed in the trading season.
Example data:
Item     Qty      Stores    Weeks    ROS
Itm1     600      390       2        0.77
Itm2     444      461       2        0.48
Itm3     348      440       2        0.40
Total    1,392    1,291*    6*       1.65*
Fields marked with a * are those where the sum is a meaningless figure unrelated to the data. I do not actually need Stores and Weeks to show in the table, so the fact that they sum does not matter. However, ROS is essential, but the sum part is totally irrelevant and I do not want it to show. Any ideas? I am open to the idea of using R to overcome the lack of flexibility in the standard tables although my knowledge in this area is fairly limited.
I suspect you've made a common mistake - using a Calculated Column for ROS where you should've used a Measure.
If you rebuild that calculation as a Measure, then you can wrap the HASONEVALUE function around it, with the objective of showing a blank when there are multiple Item values in context (the Total row).
Roughly the Measure formula would be:
ROS = IF ( HASONEVALUE ( Mytable[Item] ) , << calculation >> , BLANK() )
I would also replace your use of / with the DIVIDE function, to avoid divide by zero errors.
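Fleshed out roughly, the measure might look like the sketch below. The table name MyTable is hypothetical, and the inner aggregations (SUM vs AVERAGE vs MAX) are assumptions based on how Stores and Weeks were described above, so adjust to your model:
ROS =
IF (
    HASONEVALUE ( MyTable[Item] ),
    DIVIDE (
        DIVIDE ( SUM ( MyTable[Qty] ), AVERAGE ( MyTable[Stores] ) ),  // qty per store
        MAX ( MyTable[Weeks] )                                         // per week on sale
    ),
    BLANK ()  // multiple Items in context = the Total row, so show nothing
)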
You can remove individual totals for columns in tables and matrix objects in a round-about way by using field formatting.
Click the object, go to formatting, click the field formatting accordion, select the column or columns you want to affect from the drop-down list, set the font color to white, set 'apply to values' to off, and set 'apply to totals' to on.
A bit tedious if you have many columns, but you will have, in effect, whited-out the column totals.
Heads up, you might still have a problem with exporting data, though.
Cheers
Click on the table -> Fields -> expand the value field you don't want to include -> Select "Don't Summarize." This will exclude it from the "Total" row.
Select the 'Don't summarize' option for the metrics you don't want totaled.
Select the table you want to change
In the Visualizations pane:
Go to Format,
Find the Field Formatting option,
Choose the field you don't want to summarize.
Turn off 'apply to header',
Turn off 'apply to values',
Turn ON 'apply to total',
Change the font color to white.

Grafana: Calculating single value growth percentage from sum(series)

Given a set of series aaa.bbb.*.ccc, I need to sum up all series and show a single value: the percentage increase of the LAST value compared to the last available value 1 month ago. In other words: (last_now - last_1m_ago) / last_1m_ago. I know I can "Override relative time" with "1M" value, but the single-number calculation escapes me. Thanks!
You'll want something like:
asPercent(sumSeries(aaa.bbb.*.ccc), timeShift(sumSeries(aaa.bbb.*.ccc), "1mon"))

Matrix Factorization and Gradient Descent

I am following the paper found here and am trying to do Batch Gradient Descent (BGD) instead of the Stochastic Gradient Descent (SGD) as described in the paper.
For SGD what I gather is you do this (pseudocode):
for each user's actual rating {
1. calculate the difference between the actual rating
and the rating calculated from the dot product
of the two factor matrices (user vector and item vector).
2. multiply answer from 1. by the item vector
corresponding to that rating.
3. alter the initial user vector by the figure
calculated in 2., multiplied by lambda, e.g.:
userVector = userVector + lambda * (answer from 2.)
}
Repeat for every user
Do the same for every Item, except in 2. multiply by the user vector instead of the item vector
Go back to start and repeat until some breakpoint
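For concreteness, here is a minimal numpy sketch of that per-rating SGD update (regularization omitted, and both vectors updated inside the same loop, which is a common variant of the alternating user/item passes described above); it assumes a dense ratings matrix where 0 means "no rating":
import numpy as np

def sgd_pass(R, U, V, lr=0.002):
    # R: (num_users, num_items) ratings, 0 = missing
    # U: (num_users, k) user factors, V: (num_items, k) item factors
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - U[u] @ V[i]   # 1. actual rating minus dot-product prediction
        u_old = U[u].copy()
        U[u] += lr * err * V[i]       # 2./3. move the user vector by lambda * err * item vector
        V[i] += lr * err * u_old      # same update for the item vector
    return U, V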
For BGD what I did was:
for each user {
1. sum up all their prediction errors e.g.
(real rating - (user vector . item vector)) x item vector
2. alter the user vector by the figure calculated in 1. x by lambda.
}
Then repeat for the Items exchanging item vector in 2. for user vector
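As a sketch of that per-user batch update (again in numpy, regularization omitted, 0 meaning "no rating"):
import numpy as np

def batch_user_pass(R, U, V, lr=0.002):
    # For each user: accumulate error * item vector over all of their ratings
    # (step 1 above), then update that user's vector once (step 2).
    for u in range(U.shape[0]):
        grad = np.zeros_like(U[u])
        for i in np.nonzero(R[u])[0]:
            err = R[u, i] - U[u] @ V[i]
            grad += err * V[i]
        U[u] += lr * grad
    return U
# The item pass is the same loop with the roles of U and V swapped.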
This seems to make sense, but on further reading, I have become confused about BGD. It says that BGD must iterate throughout the entire dataset just to make 1 change. Does this mean like what I have done, the entire dataset relative to that particular user, or does it literally mean the entire dataset?
I made an implementation that goes through the entire dataset, summing every single prediction error and then using that figure to update every single user vector (so all user vectors are updated by the same amount!). However, it does not approach a minimum and fluctuates rapidly, even with a lambda rate of 0.002. It can go from an average error of 12'500 to 1.2, then to -539 etc. Eventually, the number approaches infinity and my program fails.
Any help on the mathematics behind this would be great.

how to do a sum in graphite but exclude cases where not all data is present

We have 4 data series and once in a while one of the 4 has a null as we missed reading the data point. This makes the graph look like we have awful spikes in loss of volume coming in, which is not true, as we were just missing the data point.
I am doing a basic sumSeries(server*.InboundCount) right now for server 1, 2, 3, 4 where the * is.
Is there a way to have Graphite NOT sum those locations on the line, and instead make the sum for those points in time also null, so it connects the line from the point where there is data to the next point where there is data?
NOTE: We also display the graphs server*.InboundCount individually to watch for spikes on individual servers.
Or perhaps there is a function that looks at all the series and, if any of the values is null, returns null for every series (it takes X series and returns X series), so that passing null+null+null+null to the sum function hopefully doesn't result in a spike and shows null instead.
thanks,
Dean
This is an old question, but it still deserves an answer as a point of reference. What you're after, I believe, is the function keepLastValue:
Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over. Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
This would make your function
sumSeries(keepLastValue(server*.InboundCount))
This will work OK if you have a single null datapoint here and there. If you have multiple consecutive null datapoints, you can specify how far back to look before a null breaks your data. For example, the following will look back up to 10 values before the sumSeries breaks:
sumSeries(keepLastValue(server*.InboundCount, 10))
I'm sure you've since solved your problems, but I hope this helps someone.

Graphite does not graph values correctly when using long durations?

I'm trying to graph data using statsd and graphite. I have a simple counter, I increment it by 1, and then when I graph the values for the counter over the day, I see strange values like 0.09 as the peak in my graph (see http://i.stack.imgur.com/o4gmz.png)
This graph should be showing 2 logins, but instead it's showing 0.09. If I change the time scale from 1 day to the last 15 minutes, then it correctly shows the two logins (see http://i.stack.imgur.com/23vDJ.png)
I've set up my finest retention to be in 10s increments in storage-schemas.conf:
retentions = 10s:7d,1m:21d,24h:5y
I've set up my storage-aggregation.conf file to sum counts:
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
(And, before you ask, yes; this is a .count).
If I try my URL with &rawData=true then in either case I see some Nones, some 0.0s, and a pair of 1.0s separated by some 0.0s. I never see these fractional values that somehow show up on the graph. So... Is this a bug? Am I doing something wrong?
There's also the consolidateBy function, which tells Graphite what to do when there are not enough pixels to draw everything accurately. By default it uses the "average" function, hence the strange results over longer time ranges. Here is an excerpt from the documentation:
When a graph is drawn where width of the graph size in pixels is
smaller than the number of datapoints to be graphed, Graphite
consolidates the values to prevent line overlap. The
consolidateBy() function changes the consolidation function from the
default of ‘average’ to one of ‘sum’, ‘max’, or ‘min’. This is
especially useful in sales graphs, where fractional values make no
sense and a ‘sum’ of consolidated values is appropriate.
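For a counter like the one in the question, that means wrapping the target in consolidateBy with 'sum'; the metric name below is a hypothetical statsd-style name, not taken from the question:
consolidateBy(stats.counters.logins.count, 'sum')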
Another function that could be useful is hitcount. A short excerpt from here on why it's useful:
This function is like summarize(), except that it compensates
automatically for different time scales (so that a similar graph
results from using either fine-grained or coarse-grained records) and
handles rarely-occurring events gracefully.
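Again with a hypothetical metric name, counting logins per hour would look something like:
hitcount(stats.counters.logins.count, "1hour")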
I spent some time scratching my head over why I was getting fractions for my counter with time ranges longer than a couple of hours, when my aggregation rule is max. It's pretty confusing, especially at the beginning when you play with single counters to see if everything works. Checking rawData is quite a good way to do a sanity check when debugging ;)
