Moving average divided by moving average - r

I know that we can use the built-in "Moving Average" function to calculate a rolling average over a specific interval such as 3 months or 12 months. Is it possible to divide two moving average values to get a "per system" value?
For example:
Moving average 1: Total number of Hrs
Moving average 2: Total number of systems
Per System = Total number of Hrs / Total number of systems
Appreciate your help and suggestions.

Found a solution.
Create a simple calculated column using the "OVER" and "Intersect" functions to calculate the moving average.
For example: Sum([Total number of Hrs]) OVER (Intersect(LastPeriods(12,[Yr_Mn]),[XXX])) / 12
Use the same logic to calculate the moving average for the total number of systems.
In the visualization, use the "First" function to exclude the duplicates.
For example:
First([Total number of Hrs]) / First([Total number of systems]) as [Per System Value]
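The underlying idea — dividing two rolling averages computed over the same window — can be sketched in plain Python. The data and the window length of 3 (instead of 12) are made-up placeholders standing in for the Spotfire columns above:

```python
def moving_avg(values, i, window):
    """Average over the last `window` entries ending at index i
    (a shorter window is used near the start, like partial periods)."""
    lo = max(0, i - window + 1)
    chunk = values[lo:i + 1]
    return sum(chunk) / len(chunk)

# Hypothetical monthly data standing in for [Total number of Hrs]
# and [Total number of systems]; window=3 keeps the example short.
hours = [100, 120, 140, 160]
systems = [10, 10, 20, 20]
window = 3

per_system = [
    moving_avg(hours, i, window) / moving_avg(systems, i, window)
    for i in range(len(hours))
]
# e.g. at i=2: (100+120+140)/3 divided by (10+10+20)/3 = 120 / 13.33 = 9.0
```

Note that a ratio of two averages over the same window equals the ratio of the two rolling sums, so dividing by 12 in both Spotfire expressions cancels out.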

Related

Count total values after CountDistinct

I created a table in which I want to see all the resources that were used on one day, across different missions. It's possible that a resource executed more than one mission per day; that's why I used an expression with CountDistinct, to show only the unique number of resources used in one day across all the missions.
Now, as a next step, I want to see what the average number of unique resources is for a selected time period.
Unfortunately, I am not able to use a Count or Sum expression on top of the CountDistinct expression.
If I execute a Sum function, it gives me the total number of unique values spread across the whole time period, but I want the sum of the resources used per day.
For example, say I have 3 resources: on day 1 I use resource A for 5 missions, and on day 2 I use resources A and B for 6 missions. That makes 11 missions over 2 days, and 3 per-day resources (A + A + B).
So with my real data I want to sum 82 + 92 + 100 + 90 + 91 + 92. How do I get the sum of these values?
Any suggestions on how to fix this, please?
Many thanks!
Found the solution: I created two extra datasets to pull the unique values per day.
I added a Lookup function in one of the two tablixes to compare the values on the same dates (dates present in both datasets), which yields the unique values per day. Then I summed those values and divided by the number of days to get the average unique values per day.
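The per-day logic described above — count distinct resources within each day, then sum and average those daily counts — can be sketched in Python; the mission log is made-up data following the A/B example:

```python
from collections import defaultdict

# (day, resource) pairs: resource A flies 5 missions on day 1,
# resources A and B fly 6 missions on day 2 (made-up example data).
missions = [(1, "A")] * 5 + [(2, "A")] * 3 + [(2, "B")] * 3

# Collect the distinct resources seen on each day.
resources_per_day = defaultdict(set)
for day, resource in missions:
    resources_per_day[day].add(resource)

daily_unique = {day: len(res) for day, res in resources_per_day.items()}
total_unique = sum(daily_unique.values())      # 1 + 2 = 3  (A; then A and B)
avg_unique = total_unique / len(daily_unique)  # 3 / 2 = 1.5 per day
```

This is exactly what summing a CountDistinct per day would produce, as opposed to a single CountDistinct over the whole period (which would give 2 here: A and B).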

Give a column's data equal values by dividing by the number of items in the series in R

I'd like an easy function to assign equal values, instead of writing them out every time I change the number of items. In this example I have 5, but I might want more or fewer. All values add up to 1. I know I didn't explain it properly, but I hope you understand.
Amount <- c(Coat=1/5, Boat=1/5, Shop=1/5, Car=1/5, Bike=1/5)
Let's say I have 5 items in a column. I want the values in the column to all be equal (1/5), and I want the column to add up to 1:
sum(Amount) # = 1
I don't know whether I understood correctly, but I think you want to transform a vector to retain the proportions but have a sum of exactly 1. The way to do this would be to divide the vector by its sum:
> amounts = c("car"=2,"bike"=2,"ship"=1)
> amounts = amounts/sum(amounts)
> amounts
car bike ship
0.4 0.4 0.2
Is this what you were looking for?
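For comparison, here is the same pair of ideas sketched in Python: equal shares that sum to 1, and normalizing arbitrary values by their sum. The names are the ones from the R examples above:

```python
def equal_shares(names):
    """Give every name an equal value so the values sum to 1."""
    return {name: 1 / len(names) for name in names}

def normalize(values):
    """Keep the proportions of a dict of numbers but rescale the sum to 1."""
    total = sum(values.values())
    return {key: value / total for key, value in values.items()}

amount = equal_shares(["Coat", "Boat", "Shop", "Car", "Bike"])
# every value is 0.2, and the values sum to 1

amounts = normalize({"car": 2, "bike": 2, "ship": 1})
# {'car': 0.4, 'bike': 0.4, 'ship': 0.2}
```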

Tableau - Average of Ranking based on Average

For a certain data range, for a specific dimension, I need to calculate the average value of a daily rank based on the average value.
First of all this is the starting point:
This is quite simple: for each day and category I get the AVG(Value) and the Rank based on that AVG(Value), computed using Category.
Now what I need is "just" a table with one row for each Category with the average value of that rank for the overall period.
Something like this:
Category    Global Rank
A (blue)    1.6    (1+3+1+1+1+3)/6
B (orange)  2.3    (3+2+3+2+2+2)/6
C (red)     2.0    (2+1+2+3+3+1)/6
I tried using an LOD, but it's not possible to use a rank table calculation inside one, so I'm wondering if I'm missing anything or if this is even possible in Tableau.
Please find attached the twbx with the raw data here:
Any Help would be appreciated.
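As a sanity check on the target numbers, the "average of a daily rank" computation can be reproduced outside Tableau. The rank sequences below are the ones listed in the table above:

```python
# Daily ranks per category over the six-day period, taken from the example.
daily_ranks = {
    "A": [1, 3, 1, 1, 1, 3],
    "B": [3, 2, 3, 2, 2, 2],
    "C": [2, 1, 2, 3, 3, 1],
}

# Global rank = plain mean of each category's daily ranks.
global_rank = {cat: sum(ranks) / len(ranks) for cat, ranks in daily_ranks.items()}
# A = 10/6 = 1.67, B = 14/6 = 2.33, C = 12/6 = 2.0
```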

Moving average with dynamic window

I'm trying to add a new column to my data table that contains the average of some of the following rows. How many rows to be selected for the average however depends on the time stamp of the rows.
Here is some test data:
DT<-data.table(Weekstart=c(1,2,2,3,3,4,5,5,6,6,7,7,8,8,9,9),Art=c("a","b","a","b","a","a","a","b","b","a","b","a","b","a","b","a"),Demand=c(1:16))
I want to add a column with the mean of all demands which occurred in the weeks ("Weekstart") up to three weeks before the respective week (grouped by Art, excluding the current week).
With rollapply from zoo-library, it works like this:
setorder(DT,-Weekstart)
DT[,RollMean:=rollapply(Demand,width=list(1:3),partial=TRUE,FUN=mean,align="left",fill=NA),.(Art)]
The problem, however, is that some data is missing. In the example, the data for Art b lacks week 4: there is no demand in week 4. Since I want the average of the three prior weeks, not the three prior rows, the average is wrong. Instead, the result for Art b in week 6 should look like this:
DT[Art=="b"&Weekstart==6,RollMean:=6]
(6 instead of 14/3, because only Week 5 and Week 3 count: (8+4)/2)
Here is what I tried so far:
It would be possible to loop over the minima of the weeks of the following rows in order to create a vector that defines, for each row, how wide the 'width' should be (the new column 'rollwidth'):
i<-3
DT[,rollwidth:=Weekstart-rollapply(Weekstart,width=list(1:3),partial=TRUE,FUN=min,align="left",fill=1),.(Art)]
while (max(DT[,Weekstart-rollapply(Weekstart,width=list(1:i),partial=TRUE,FUN=min,align="left",fill=NA),.(Art)][,V1],na.rm=TRUE)>3) {
i<-i-1
DT[rollwidth>3,rollwidth:=i]
}
But that seems very unprofessional (excuse my poor skills). And, unfortunately, rollapply with width=list(1:rollwidth) doesn't work as intended (it produces warnings because 'rollwidth' is evaluated as all the rollwidths in the table):
DT[,RollMean2:=rollapply(Demand,width=list(1:rollwidth),partial=TRUE,FUN=mean,align="left",fill=NA),.(Art)]
What does work is
DT[,RollMean3:=rollapply(Demand,width=rollwidth,partial=TRUE,FUN=mean,align="left",fill=NA),.(Art)]
but then again, the average includes the current week (not what I want).
Does anybody know how to apply a criterion (i.e. the difference in the weeks shall be <= 3) instead of a number of rows to the argument width?
Any suggestions are appreciated!
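A time-based window (rather than a row-based one) can also be expressed directly as a filter on the week difference. This Python sketch of that logic uses the test data from the question and reproduces the expected value for Art b in week 6:

```python
# Test data from the question (same rows as the data.table DT).
weeks  = [1, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]
arts   = list("ababaaabbabababa")
demand = list(range(1, 17))

def roll_mean(i, span=3):
    """Mean demand for the same Art in the `span` weeks strictly before
    row i's week -- a time-based window, so missing weeks simply drop out."""
    vals = [
        d for w, a, d in zip(weeks, arts, demand)
        if a == arts[i] and weeks[i] - span <= w < weeks[i]
    ]
    return sum(vals) / len(vals) if vals else None

roll_mean(8)  # Art b, week 6: (4 + 8) / 2 = 6.0, since week 4 has no data
```

In R itself, this kind of window-by-criterion is usually written as a data.table non-equi self-join (joining on Art plus a Weekstart range) rather than with rollapply, which only counts rows.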

How to calculate lines per order in a cube

I can't think this one through; any help would be appreciated. I have a measure with the count of the number of sales (a distinct count of transaction numbers). What I'm looking for now is the average number of lines per order.
How can I calculate that in Analysis Services 2008 R2?
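One common approach is to define the average as a ratio of two measures: total line count divided by the distinct count of transaction numbers. In plain terms (with made-up data), the computation looks like this:

```python
# One row per order line, keyed by its transaction number (made-up data).
order_lines = ["T1", "T1", "T2", "T3", "T3", "T3"]

line_count = len(order_lines)                    # 6 order lines
order_count = len(set(order_lines))              # 3 distinct orders
avg_lines_per_order = line_count / order_count   # 6 / 3 = 2.0
```

In SSAS this would typically be a calculated member dividing a line-count measure by the existing distinct-count measure; the exact measure names depend on the cube.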
