Hello everyone, I need your help calculating this expression. I have a data frame of income streams (made up of 5 "t" periods) from different years. What I need is a command that makes R understand the highlighted part of the formula under the summation symbol: R should multiply by the l0 coefficient when there is a loss, and by the g0 gain coefficient when there is a gain.
delta=15/16
g0=16/15
l0=1
  2004 2006 2008 2010 2012
1    5   10   12   14    8
2   13    5    4    3    1
3    4    2    1    8   10
So if this is the data frame, for observation 1 I need to calculate this way:
[(10-5)*(15/16)^(4-1) + (12-10)*(15/16)^(3-1) + (14-12)*(15/16)^(2-1)]*16/15 + [(8-14)*(15/16)^(1-1)]*1
for observation 2 this way:
[(5-13)*(15/16)^(4-1) + (4-5)*(15/16)^(3-1) + (3-4)*(15/16)^(2-1) + (1-3)*(15/16)^(1-1)]*1
and for observation 3 this way:
[(2-4)*(15/16)^(4-1) + (1-2)*(15/16)^(3-1)]*1 + [(8-1)*(15/16)^(2-1) + (10-8)*(15/16)^(1-1)]*16/15
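All three calculations instantiate the same sum. Written out (my reconstruction from the worked examples above, since the formula image itself is not shown here), with T = 5 periods:

$$I = \sum_{t=1}^{T-1} (x_{t+1} - x_t)\,\delta^{\,T-1-t}\,c_t, \qquad c_t = \begin{cases} g_0 & \text{if } x_{t+1} - x_t > 0 \\ l_0 & \text{otherwise.} \end{cases}$$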
Assuming your data.frame is generated as below:
x1 <- c(5,10,12,14,8)
x2 <- c(13,5,4,3,1)
x3 <- c(4,2,1,8,10)
df <- as.data.frame(rbind(x1,x2,x3))
rownames(df) <- as.character(c(1,2,3))
colnames(df) <- as.character(c(2004,2006,2008,2010,2012))
then you can use the following for your purpose:
I <- function(x, delta = 15/16, g0 = 16/15, l0 = 1) {
  dx <- diff(x)  # period-to-period changes
  # discount each change by delta^(periods remaining), weighting gains by g0 and losses by l0
  sum(sapply(seq_along(dx), function(k)
    dx[k] * delta^(length(dx) - k) * ifelse(dx[k] > 0, g0, l0)))
}
r <- apply(df, 1, I)
where I is the function you described.
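As a quick check, r reproduces the three hand calculations above (my arithmetic, rounded):

r
#         1          2          3
#  2.269531 -10.408203   6.606478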
I would like to create a rolling two-quarter average for alpha, bravo and charlie (and lots of other variables). My research keeps leading me to the zoo and lubridate packages, but I always seem to end up back at rolling within one variable or one grouping.
set.seed(123)
dates <- c("Q4'15", "Q1'16", "Q2'16","Q3'16", "Q4'16", "Q1'17", "Q2'17" ,"Q3'17", "Q4'17","Q1'18")
df <- data.frame(dates = sample(dates, 100, replace = TRUE, prob=rep(c(.03,.07,.03,.08, .05),2)),
alpha = rnorm(100, 5), bravo = rnorm(100, 10), charlie = rnorm(100, 15))
I'm looking for something like
x <- df %>% mutate_if(is.numeric, funs(rollmean(., 2, align='right', fill=NA)))
Desired result: a weighted average across "Q4'15" & "Q1'16", "Q1'16" & "Q2'16", etc for each column of data (alpha, bravo, charlie). Not looking for the average of the paired quarterly averages.
Here is what the averages would be for the "Q4'15" & "Q1'16" time point:
df %>% filter(dates %in% c("Q4'15", "Q1'16")) %>% select(-dates) %>% summarise_all(mean)
I like data.table for this, and I have a solution for you but there may be a more elegant one. Here is what I have:
Data
Now as data.table:
R> suppressMessages(library(data.table))
R> set.seed(123)
R> datesvec <- c("Q4'15", "Q1'16", "Q2'16","Q3'16", "Q4'16",
+ "Q1'17", "Q2'17" ,"Q3'17", "Q4'17","Q1'18")
R> df <- data.table(dates = sample(datesvec, 100, replace = TRUE,
+ prob=rep(c(.03,.07,.03,.08, .05),2)),
+ alpha = rnorm(100, 5),
+ bravo = rnorm(100, 10),
+ charlie = rnorm(100, 15))
R> df[ , ind := which(datesvec==dates), by=dates]
R> setkey(df, ind) # optional but may as well
R> head(df)
dates alpha bravo charlie ind
1: Q4'15 5.37964 11.05271 14.4789 1
2: Q4'15 7.05008 10.36896 15.0892 1
3: Q4'15 4.29080 12.12845 13.6047 1
4: Q4'15 5.00576 8.93667 13.3325 1
5: Q4'15 3.53936 9.81707 13.6360 1
6: Q1'16 3.45125 10.56299 16.0808 2
R>
The key here is that we need to restore / maintain the temporal ordering of your quarters, which your data representation does not carry.
Average by quarter
This is easy with data.table:
R> ndf <- df[ ,
+ .(qtr=head(dates,1), # label of quarter
+ sa=sum(alpha), # sum of a in quarter
+ sb=sum(bravo), # sum of b in quarter
+ sc=sum(charlie), # sum of c in quarter
+ n=.N), # number of observations
+ by=ind]
R> ndf
ind qtr sa sb sc n
1: 1 Q4'15 25.2656 52.3039 70.1413 5
2: 2 Q1'16 65.8562 132.6650 192.7921 13
3: 3 Q2'16 10.3422 17.8061 31.3404 2
4: 4 Q3'16 84.6664 168.1914 256.9010 17
5: 5 Q4'16 41.3268 87.8253 139.5873 9
6: 6 Q1'17 42.6196 85.4059 134.8205 9
7: 7 Q2'17 76.5190 162.0784 241.2597 16
8: 8 Q3'17 42.8254 83.2483 127.2600 8
9: 9 Q4'17 68.1357 133.5794 198.1920 13
10: 10 Q1'18 37.0685 78.4107 120.2808 8
R>
Lag those averages once
R> ndf[, `:=`(psa=shift(sa), # previous sum of a
+ psb=shift(sb), # previous sum of b
+ psc=shift(sc), # previous sum of c
+ pn=shift(n))] # previous nb of obs
R> ndf
ind qtr sa sb sc n psa psb psc pn
1: 1 Q4'15 25.2656 52.3039 70.1413 5 NA NA NA NA
2: 2 Q1'16 65.8562 132.6650 192.7921 13 25.2656 52.3039 70.1413 5
3: 3 Q2'16 10.3422 17.8061 31.3404 2 65.8562 132.6650 192.7921 13
4: 4 Q3'16 84.6664 168.1914 256.9010 17 10.3422 17.8061 31.3404 2
5: 5 Q4'16 41.3268 87.8253 139.5873 9 84.6664 168.1914 256.9010 17
6: 6 Q1'17 42.6196 85.4059 134.8205 9 41.3268 87.8253 139.5873 9
7: 7 Q2'17 76.5190 162.0784 241.2597 16 42.6196 85.4059 134.8205 9
8: 8 Q3'17 42.8254 83.2483 127.2600 8 76.5190 162.0784 241.2597 16
9: 9 Q4'17 68.1357 133.5794 198.1920 13 42.8254 83.2483 127.2600 8
10: 10 Q1'18 37.0685 78.4107 120.2808 8 68.1357 133.5794 198.1920 13
R>
Average over current and previous quarter
R> ndf[is.finite(psa), # where we have valid data
+ `:=`(ra=(sa+psa)/(n+pn), # total sum / total n == avg
+ rb=(sb+psb)/(n+pn),
+ rc=(sc+psc)/(n+pn))]
R> ndf[,c(1:2, 11:13)]
ind qtr ra rb rc
1: 1 Q4'15 NA NA NA
2: 2 Q1'16 5.06233 10.27605 14.6074
3: 3 Q2'16 5.07989 10.03141 14.9422
4: 4 Q3'16 5.00045 9.78935 15.1706
5: 5 Q4'16 4.84589 9.84680 15.2496
6: 6 Q1'17 4.66369 9.62395 15.2449
7: 7 Q2'17 4.76554 9.89937 15.0432
8: 8 Q3'17 4.97268 10.22195 15.3550
9: 9 Q4'17 5.28386 10.32513 15.4977
10: 10 Q1'18 5.00972 10.09476 15.1654
R>
This takes advantage of the fact that the total sum over two quarters divided by the total number of observations is the same as the mean over the pooled observations of those two quarters. (And this reflects an edit following an earlier thinko of mine.)
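A one-line sanity check of that identity on arbitrary vectors (my addition, not part of the original answer):

R> a <- rnorm(5); b <- rnorm(13)
R> all.equal(mean(c(a, b)), (sum(a) + sum(b)) / (length(a) + length(b)))
[1] TRUE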
Spot check
We can use the selection feature of data.table to compute two of those rows by hand; I picked the ones for indices <1,2> and <4,5> here:
R> df[ ind <= 2, .(a=mean(alpha), b=mean(bravo), c=mean(charlie))]
a b c
1: 5.06233 10.276 14.6074
R> df[ ind == 4 | ind == 5, .(a=mean(alpha), b=mean(bravo), c=mean(charlie))]
a b c
1: 4.84589 9.8468 15.2496
R>
This pans out fine, and the approach should scale easily to millions of rows thanks to data.table.
PS: All in One
As you mentioned pipes etc, you can write all this with chained data.table operations. Not my preferred style, but possible. The following creates the exact same output without ever creating an ndf temporary as above:
## All in one
df[ , ind := which(datesvec==dates), by=dates][
,
.(qtr=head(dates,1), # label of quarter
sa=sum(alpha), # sum of a in quarter
sb=sum(bravo), # sum of b in quarter
sc=sum(charlie), # sum of c in quarter
n=.N), # number of observations
by=ind][
,
`:=`(psa=shift(sa), # previous sum of a
psb=shift(sb), # previous sum of b
psc=shift(sc), # previous sum of c
pn=shift(n))][
is.finite(psa), # where we have valid data
`:=`(ra=(sa+psa)/(n+pn), # total sum / total n == avg
rb=(sb+psb)/(n+pn),
rc=(sc+psc)/(n+pn))][
,c(1:2, 11:13)][]
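Since the question started from dplyr-style pipes, here is a rough dplyr sketch of the same sum-then-pool idea (my addition, not part of the answer above; it assumes dplyr >= 1.0 and the df and datesvec objects defined earlier):

library(dplyr)

df %>%
  group_by(ind = match(dates, datesvec)) %>%   # restore quarter ordering
  summarise(qtr = first(dates),
            across(c(alpha, bravo, charlie), sum, .names = "s_{.col}"),
            n = n()) %>%
  arrange(ind) %>%
  mutate(across(starts_with("s_"),             # pooled mean of current + previous quarter
                ~ (.x + lag(.x)) / (n + lag(n))))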
I'm trying to create a function that sums the n values closest to a given date. So if I had 5 weeks of data and n=2, the value on week 1 would be the sum of weeks 2 & 3, the value on week 2 would be the sum of weeks 1 & 3, etc. Example:
library(dplyr)
library(data.table)
Week <- 1:5
Sales <- c(1, 3, 5, 7, 9)
frame <- data.table(Week, Sales)
frame  # shown with the desired Recent column added:
   Week Sales Recent
1: 1 1 8
2: 2 3 6
3: 3 5 10
4: 4 7 14
5: 5 9 12
I want to make a function that does this for me with an input for most recent n (not just 2), but for now I want to get 2 right. Here's my function using lag/lead:
RecentSum = function(Variable, Lags){
Sum = 0
for(i in 1:(Lags/2)){ #Lags/2 because I want half values before and half after
#Check to see if you can go backwards. If not, go foward (i.e. use lead).
if(is.na(lag(Variable, i))){
LoopSum = lead(Variable, i)
}
else{
LoopSum = lag(Variable, i)
}
Sum = Sum + LoopSum
}
for(i in 1:(Lags/2)){
if(is.na(lead(Variable, i))){ #Check to see if you can go forward. If not, go backwards (i.e. use lag).
LoopSum = lag(Variable, i)
}
else{
LoopSum = lead(Variable, i)
}
Sum = Sum + LoopSum
}
Sum
}
When I do RecentSum(frame$Sales, 2) I get [1] 6 10 14 18 NA, which is wrong for a number of reasons:
My if statements are only hitting on week one, so it will always be NA for lag and always be non-NA for lead.
I need to have a way to see if it uses lag/lead the first time. The first value is 6 instead of 8 because the first for-loop sends it to lead(_,1), but then the second for-loop does the same. I can't think of how I'd make my second for-loop recognize this.
Is there a function or library (zoo?) that makes this task easy? I'd like to get my own function to work for the sake of practice/understanding, but at this point I'd rather just get it done.
Thanks!
To elaborate on my comment, lead and lag are vectorized functions meant to operate on whole columns, for example inside dplyr verbs such as mutate. Here is a way to do it within dplyr without writing your own function:
df <- tibble(week = Week, sales = Sales)
df %>%
mutate(recent = case_when(is.na(lag(sales)) ~ lead(sales, n = 1) + lead(sales, n = 2),
is.na(lead(sales)) ~ lag(sales, n = 1) + lag(sales, n = 2),
TRUE ~ lag(sales) + lead(sales)))
That gives you this:
# A tibble: 5 x 3
week sales recent
<int> <dbl> <dbl>
1 1 1 8
2 2 3 6
3 3 5 10
4 4 7 14
5 5 9 12
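If you later want a general n rather than just 2, here is a hedged base-R sketch of one interpretation, summing the n values nearest to each position and shifting forward or backward at the edges (RecentSumN is my own helper name):

RecentSumN <- function(x, n) {
  sapply(seq_along(x), function(i) {
    idx <- setdiff(seq_along(x), i)       # every position except i
    idx <- idx[order(abs(idx - i))][1:n]  # the n nearest positions
    sum(x[idx])
  })
}
RecentSumN(Sales, 2)
# [1]  8  6 10 14 12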
1) Assuming that k is even, define to as a vector of indices such that, for each element of to, we sum the k+1 elements of Sales that end at that index and then subtract Sales from the result:
k <- 2 # number of elements to sum
n <- nrow(frame)
to <- pmax(k+1, pmin(1:n + k/2, n))
Sum <- function(to, Sales) sum(Sales[seq(to = to, length = k+1)])
frame %>% mutate(recent = sapply(to, Sum, Sales) - Sales)
giving:
Week Sales recent
1 1 1 8
2 2 3 6
3 3 5 10
4 4 7 14
5 5 9 12
Note that by replacing the last line of code above with the following line the solution can be done entirely in base R:
transform(frame, recent = sapply(to, Sum, Sales) - Sales)
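To see what the index vector looks like here (k = 2, n = 5):

to
# [1] 3 3 4 5 5
# e.g. for week 1: sum(Sales[1:3]) - Sales[1] = (1 + 3 + 5) - 1 = 8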
2) This concatenates the appropriate elements before and after the Sales series so that an ordinary rolling sum gives the result.
library(zoo)
ix <- c(seq(to = k+1, length = k/2), 1:n, seq(to = n-k, length = k/2))
frame %>% mutate(recent = rollsum(Sales[ix], k+1) - Sales)
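For intuition, with k = 2 and n = 5 the padded index vector is:

ix
# [1] 3 1 2 3 4 5 3
# rolling sums of width 3 over Sales[ix] cover weeks 1..5;
# e.g. week 1: (Sales[3] + Sales[1] + Sales[2]) - Sales[1] = 8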
Note that if k=2 it reduces to this one-liner:
frame %>% mutate(recent = rollsum(Sales[c(3, 1:n(), n()-2)], 3) - Sales)
giving:
Week Sales recent
1 1 1 8
2 2 3 6
3 3 5 10
4 4 7 14
5 5 9 12
Update: fixed for k > 2
I would like to aggregate an R data.frame by equal amounts of the cumulative sum of one of the variables in the data.frame. I googled quite a lot, but probably I don't know the correct terminology to find anything useful.
Suppose I have this data.frame:
> x <- data.frame(cbind(p=rnorm(100, 10, 0.1), v=round(runif(100, 1, 10))))
> head(x, 20)
p v
1 10.002904 4
2 10.132200 2
3 10.026105 6
4 10.001146 2
5 9.990267 2
6 10.115907 6
7 10.199895 9
8 9.949996 8
9 10.165848 8
10 9.953283 6
11 10.072947 10
12 10.020379 2
13 10.084002 3
14 9.949108 8
15 10.065247 6
16 9.801699 3
17 10.014612 8
18 9.954638 5
19 9.958256 9
20 10.031041 7
I would like to reduce x to a smaller data.frame where each line contains the weighted average of p, weighted by v, corresponding to an amount of n units of v. Something of this sort:
> n <- 100
> cum.v <- cumsum(x$v)
> f <- cum.v %/% n
> x.agg <- aggregate(cbind(v*p, v) ~ f, data=x, FUN=sum)
> x.agg$'v * p' <- x.agg$'v * p' / x.agg$v
> x.agg
f v * p v
1 0 10.039369 98
2 1 9.952049 94
3 2 10.015058 104
4 3 9.938271 103
5 4 9.967244 100
6 5 9.995071 69
First question: I was wondering if there is a better (more efficient) approach than the code above. The second, more important, question is how to correct the code above in order to obtain more precise bucketing. Namely, each row in x.agg should contain exactly 100 units of v, not just approximately, as is the case above. For example, the first row contains the aggregate of the first 17 rows of x, which correspond to 98 units of v. The next row (row 18) contains 5 units of v and is fully included in the next bucket. What I would like instead is to attribute 2 units of row 18 to the first bucket and the remaining 3 units to the following one.
Thanks in advance for any help provided.
Here's another method that does this without repeating each p value v times. The way I understand it, the place where the cumulative sum of v crosses 100 (see below)
18 9.954638 5 98
19 9.958256 9 107
should be changed to:
18 9.954638 5 98
19.1 9.958256 2 100 # ---> 2 units will be considered with previous group
19.2 9.958256 7 107 # ----> remaining 7 units will be split for next group
The code:
n <- 100
# get cumulative sum, an id column (for retrace) and current group id
x <- transform(x, cv = cumsum(x$v), id = seq_len(nrow(x)), grp = cumsum(x$v) %/% n)
# To install IRanges (the old biocLite route is deprecated on current R):
# install.packages("BiocManager"); BiocManager::install("IRanges")
require(IRanges)
ir1 <- successiveIRanges(x$v)
ir2 <- IRanges(seq(n, max(x$cv), by=n), width=1)
o <- findOverlaps(ir1, ir2)
# gets position where multiple of n(=100) occurs
# (where we'll have to do something about it)
pos <- queryHits(o)
# how much do the values differ from multiple of 100?
val <- start(ir2)[subjectHits(o)] - start(ir1)[queryHits(o)] + 1
# we need new rows at each of these "pos" indices
x1 <- x[pos, ]
x1$v <- val # corresponding values
# reduce the group by 1, so that multiples of 100 will
# belong to the previous row
x1$grp <- x1$grp - 1
# subtract val in the original data x
x$v[pos] <- x$v[pos] - val
# bind and order them
x <- rbind(x1,x)
x <- x[with(x, order(id)), ]
# remove unnecessary entries
x <- x[!(duplicated(x$id) & x$v == 0), ]
x$cv <- cumsum(x$v) # updated cumsum
x$id <- NULL
require(data.table)
x.dt <- data.table(x, key="grp")
x.dt[, list(res = sum(p*v)/sum(v), cv = tail(cv, 1)), by=grp]
Running on your data:
# grp res cv
# 1: 0 10.037747 100
# 2: 1 9.994648 114
Running on #geektrader's data:
# grp res cv
# 1: 0 9.999680 100
# 2: 1 10.040139 200
# 3: 2 9.976425 300
# 4: 3 10.026622 400
# 5: 4 10.068623 500
# 6: 5 9.982733 562
Here's a benchmark on relatively big data:
set.seed(12345)
x <- data.frame(cbind(p=rnorm(1e5, 10, 0.1), v=round(runif(1e5, 1, 10))))
require(rbenchmark)
# FN1 is the complete procedure above wrapped up as a function of x
benchmark(out <- FN1(x), replications=10)
# test replications elapsed relative user.self
# 1 out <- FN1(x) 10 13.817 1 12.586
It takes about 1.4 seconds on 1e5 rows.
If you are looking for precise bucketing, I am assuming the value of p is the same for the two "split" portions of v,
i.e. in your example, the value of p for the 2 units of row 18 that go into the first bucket is 9.954638.
With the above assumption, you can do the following for datasets that are not super large:
> set.seed(12345)
> x <- data.frame(cbind(p=rnorm(100, 10, 0.1), v=round(runif(100, 1, 10))))
> z <- unlist(mapply(function(x,y) rep(x,y), x$p, x$v, SIMPLIFY=T))
This creates a vector in which each value of p is repeated v times for each row; the results are combined into a single vector using unlist.
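As an aside, the same repeated vector can be built more directly with rep(), which is vectorized over both arguments (a minor variant, not the code above):

z <- rep(x$p, times = x$v)  # each p[i] repeated v[i] times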
After this, aggregation is trivial using the aggregate function:
> aggregate(z, by=list((1:length(z)-0.5)%/%100), FUN=mean)
Group.1 x
1 0 9.999680
2 1 10.040139
3 2 9.976425
4 3 10.026622
5 4 10.068623
6 5 9.982733
I am trying to calculate the lagged difference (or actual increase) for data that has been inadvertently aggregated: each successive year in the data includes the values from the previous year. A sample data set can be created with this code:
set.seed(1234)
x <- data.frame(id=1:5, value=sample(20:30, 5, replace=T), year=3)
y <- data.frame(id=1:5, value=sample(10:19, 5, replace=T), year=2)
z <- data.frame(id=1:5, value=sample(0:9, 5, replace=T), year=1)
(df <- rbind(x, y, z))
I can use a combination of lapply() and split() to calculate the difference between each year for every unique id, like so:
(diffs <- lapply(split(df, df$id), function(x){-diff(x$value)}))
However, because of the nature of the diff() function, there are no results for the values in year 1, which means that after I flatten the diffs list of lists with Reduce(), I cannot add the actual yearly increases back into the data frame, like so:
df$actual <- Reduce(c, diffs) # flatten the list of lists
In this example, there are only 10 calculated differences or lags, while there are 15 rows in the data frame, so R throws an error when trying to add a new column.
How can I create a new column of actual increases with (1) the values for year 1 and (2) the calculated diffs/lags for all subsequent years?
This is the output I'm eventually looking for. My diffs list of lists calculates the actual values for years 2 and 3 just fine.
id value year actual
1 21 3 5
2 26 3 16
3 26 3 14
4 26 3 10
5 29 3 14
1 16 2 10
2 10 2 5
3 12 2 10
4 16 2 7
5 15 2 13
1 6 1 6
2 5 1 5
3 2 1 2
4 9 1 9
5 2 1 2
I think this will work for you. When you run into the diff problem, just lengthen the vector by prepending a 0, so that diff returns a vector of the original length.
df <- df[order(df$id, df$year), ]
sdf <-split(df, df$id)
df$actual <- as.vector(sapply(seq_along(sdf), function(x) diff(c(0, sdf[[x]][,2]))))
df[order(as.numeric(rownames(df))),]
There are lots of ways to do this, but this one is fairly fast and uses only base R.
Here are a second and third way of approaching this problem, utilizing aggregate and by (plus a plyr variant):
aggregate:
df <- df[order(df$id, df$year), ]
diff2 <- function(x) diff(c(0, x))
df$actual <- c(unlist(t(aggregate(value~id, df, diff2)[, -1])))
df[order(as.numeric(rownames(df))),]
by:
df <- df[order(df$id, df$year), ]
diff2 <- function(x) diff(c(0, x))
df$actual <- unlist(by(df$value, df$id, diff2))
df[order(as.numeric(rownames(df))),]
plyr:
df <- df[order(df$id, df$year), ]
df <- data.frame(temp=1:nrow(df), df)
library(plyr)
df <- ddply(df, .(id), transform, actual=diff2(value))
df[order(-df$year, df$temp),][, -1]
It gives you the final product of:
> df[order(as.numeric(rownames(df))),]
id value year actual
1 1 21 3 5
2 2 26 3 16
3 3 26 3 14
4 4 26 3 10
5 5 29 3 14
6 1 16 2 10
7 2 10 2 5
8 3 12 2 10
9 4 16 2 7
10 5 15 2 13
11 1 6 1 6
12 2 5 1 5
13 3 2 1 2
14 4 9 1 9
15 5 2 1 2
EDIT: Avoiding the Loop
May I suggest avoiding the loop by turning what I gave you into a function (the by solution is the easiest one for me to work with) and using sapply to apply it to the two columns you desire.
set.seed(1234) #make new data with another numeric column
x <- data.frame(id=1:5, value=sample(20:30, 5, replace=T), year=3)
y <- data.frame(id=1:5, value=sample(10:19, 5, replace=T), year=2)
z <- data.frame(id=1:5, value=sample(0:9, 5, replace=T), year=1)
df <- rbind(x, y, z)
df <- df.rep <- data.frame(df[, 1:2], new.var=df[, 2]+sample(1:5, nrow(df),
replace=T), year=df[, 3])
df <- df[order(df$id, df$year), ]
diff2 <- function(x) diff(c(0, x)) #function one
group.diff<- function(x) unlist(by(x, df$id, diff2)) #answer turned function
df <- data.frame(df, sapply(df[, 2:3], group.diff)) #apply group.diff to col 2:3
df[order(as.numeric(rownames(df))),] #reorder it
Of course you'd have to rename these unless you used transform as in:
df <- df[order(df$id, df$year), ]
diff2 <- function(x) diff(c(0, x)) #function one
group.diff<- function(x) unlist(by(x, df$id, diff2)) #answer turned function
df <- transform(df, actual=group.diff(value), actual.new=group.diff(new.var))
df[order(as.numeric(rownames(df))),]
This would depend on how many variables you were doing this to.
1) diff.zoo. With the zoo package it's just a matter of converting to zoo using split= and then performing the diff:
library(zoo)
zz <- zz0 <- read.zoo(df, split = "id", index = "year", FUN = identity)
zz[2:3, ] <- diff(zz)
It gives the following (in wide form rather than the long form you mentioned), where each column is an id and each row after the first is that year's value minus the prior year's (row 1 keeps the raw year 1 values):
> zz
1 2 3 4 5
1 6 5 2 9 2
2 10 5 10 7 13
3 5 16 14 10 14
The wide form shown may actually be preferable but you can convert it to long form if you want that like this:
dt <- function(x) as.data.frame.table(t(x))
setNames(cbind(dt(zz0), dt(zz)[3]), c("id", "year", "value", "actual"))
This puts the years in ascending order which is the convention normally used in R.
2) rollapply. Also using zoo, this alternative uses a rolling calculation to add the actual column to your data. It assumes the data is structured as you show, with the same ids in each year block, arranged in the same order:
df$actual <- rollapply(df$value, 6, partial = TRUE, align = "left",
FUN = function(x) if (length(x) < 6) x[1] else x[1]-x[6])
3) subtraction. Making the same assumptions as in the prior solution, we can simplify further to just this, which subtracts from each value the value 5 positions later:
transform(df, actual = value - c(tail(value, -5), rep(0, 5)))
or this variation:
transform(df, actual = replace(value, year > 1, -diff(ts(value), 5)))
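Both shortcuts rely on that layout: with five ids per year block, newest year first, each row's previous-year value sits exactly 5 positions later, so (using the sample data above):

prior <- c(tail(df$value, -5), rep(0, 5))  # same id, previous year (0 for year 1)
df$value - prior
#  [1]  5 16 14 10 14 10  5 10  7 13  6  5  2  9  2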
EDIT: added rollapply and subtraction solutions.
Kind of hackish, but keeping your wonderful Reduce in place, you could add mock rows to your df for year 0:
mockRows <- data.frame(id = 1:5, value = 0, year = 0)
(df <- rbind(df, mockRows))
(df <- df[order(df$id, df$year), ])
(diffs <- lapply(split(df, df$id), function(x){diff(x$value)}))
(df <- df[df$year != 0,])
(df$actual <- Reduce(c, diffs)) # flatten the list of lists
df[order(as.numeric(rownames(df))),]
This is the output:
id value year actual
1 1 21 3 5
2 2 26 3 16
3 3 26 3 14
4 4 26 3 10
5 5 29 3 14
6 1 16 2 10
7 2 10 2 5
8 3 12 2 10
9 4 16 2 7
10 5 15 2 13
11 1 6 1 6
12 2 5 1 5
13 3 2 1 2
14 4 9 1 9
15 5 2 1 2