Collapsing data frame rows in R

I have a data frame, a:
> a
GID 1 2 3 4 5 6 7 8 9 10
1 A 0.2616485 -0.1521032 -0.01281316 0.05029606 -0.01940657 0.1624178 0.14714370 0.08642461 -0.1191233 -0.01845165
2 A 0.0000000 0.0000000 0.00000000 0.00000000 0.00000000 0.0000000 0.00000000 0.00000000 0.0000000 0.00000000
3 B 0.1107565 0.1632392 -0.08760204 0.04475998 -0.10835387 -0.0827655 -0.08058794 0.15288650 -0.1120538 -0.02297707
4 B 0.0000000 0.0000000 0.00000000 0.00000000 0.00000000 0.0000000 0.00000000 0.00000000 0.0000000 0.00000000
And I'm looking for a way to get a new data frame (b) which sums the rows of data frame a by the GID column in a:
> b
GID 1 2 3 4 5 6 7 8 9 10
1 A 0.2616485 -0.1521032 -0.01281316 0.05029606 -0.01940657 0.1624178 0.14714370 0.08642461 -0.1191233 -0.01845165
3 B 0.1107565 0.1632392 -0.08760204 0.04475998 -0.10835387 -0.0827655 -0.08058794 0.15288650 -0.1120538 -0.02297707
Preferably using a base R solution.

Use aggregate (here dat is the data frame called a in the question):
aggregate(. ~ GID, data = dat, sum)
EDIT
For better performance, you can try data.table:
library(data.table)
DT <- as.data.table(dat)
DT[, lapply(.SD, sum), by = 'GID']
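For reference, a minimal reproducible sketch of the aggregate approach (the ten numeric columns from the question are reduced to two here; the idea is the same for any number of numeric columns):
a <- data.frame(GID = c("A", "A", "B", "B"),
                V1  = c(0.26, 0.00, 0.11, 0.00),
                V2  = c(-0.15, 0.00, 0.16, 0.00))
b <- aggregate(. ~ GID, data = a, FUN = sum)
b
#   GID   V1    V2
# 1   A 0.26 -0.15
# 2   B 0.11  0.16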


R function rolling division

I can perform a rolling division on vectors: I can take my data, then lag the same data and perform the division:
# Dummy Data
sample <- c(4,5,6,7,8,4,2,6,5,4,3,2,1,2,3,4,5,6)
lagSam <- lag(sample) # Lag by 1
output <- sample / lagSam # Perform division
sample.df <- data.frame(sample, desired = output)
with the desired output:
sample desired
1 4 NA
2 5 1.2500000
3 6 1.2000000
4 7 1.1666667
5 8 1.1428571
6 4 0.5000000
7 2 0.5000000
8 6 3.0000000
9 5 0.8333333
10 4 0.8000000
11 3 0.7500000
12 2 0.6666667
13 1 0.5000000
14 2 2.0000000
15 3 1.5000000
16 4 1.3333333
17 5 1.2500000
18 6 1.2000000
My question is, how do I do the same on a data frame? I have over 100 columns and need to take a rolling division of each. I am trying to write a function and use rollapply:
# My attempt
division <- function(x) {
#tail(x,1) / head(x,1)
x / lag(x)
}
rollapplyr(sample.df$sample, 1, division, fill = NA)
I tried to lag it with head and tail, and then again with x / lag(x).
Both results produce NAs.
I frequently find that I want the dplyr::lag behavior but am startled a little when stats::lag returns a tsp-like object.
You're close with tail/head functionality. Using negative n:
n: a single integer. If positive, size for the resulting object:
number of elements for a vector (including lists), rows for a
matrix or data frame or lines for a function. If negative,
all but the 'n' last/first number of elements of 'x'.
c(NA, tail(sample.df$sample,n=-1) / head(sample.df$sample,n=-1))
# [1] NA 1.2500000 1.2000000 1.1666667 1.1428571 0.5000000 0.5000000
# [8] 3.0000000 0.8333333 0.8000000 0.7500000 0.6666667 0.5000000 2.0000000
# [15] 1.5000000 1.3333333 1.2500000 1.2000000
Note that your thought to do a rolling apply is slightly problematic in that it will divide a number by the result of the previous division, not the previous value. That is, c(2,3,4) in a rolling divide should start with an initial value (say, 1, the division-identity), and go with something like c(2/1, 3/(2/1), 4/(3/(2/1))), not what (I think) you requested.
This drives the function to be:
division <- function(x) c(NA, tail(x,n=-1) / head(x,n=-1))
So then you can do
lapply(sample.df, division)
If you only want to run this on select columns, I'd do
ind <- 1
lapply(sample.df[ind], division)
# $sample
# [1] NA 1.2500000 1.2000000 1.1666667 1.1428571 0.5000000 0.5000000
# [8] 3.0000000 0.8333333 0.8000000 0.7500000 0.6666667 0.5000000 2.0000000
# [15] 1.5000000 1.3333333 1.2500000 1.2000000
cbind(sample.df, lapply(sample.df[ind], division))
# sample desired sample
# 1 4 1 NA
# 2 5 1 1.2500000
# 3 6 1 1.2000000
# 4 7 1 1.1666667
# 5 8 1 1.1428571
# 6 4 1 0.5000000
# 7 2 1 0.5000000
# 8 6 1 3.0000000
# 9 5 1 0.8333333
# 10 4 1 0.8000000
# 11 3 1 0.7500000
# 12 2 1 0.6666667
# 13 1 1 0.5000000
# 14 2 1 2.0000000
# 15 3 1 1.5000000
# 16 4 1 1.3333333
# 17 5 1 1.2500000
# 18 6 1 1.2000000
This of course generated a duplicate name, but it's a start.
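If the duplicate column name bothers you, one small sketch of a fix is to rename the computed columns before binding (the "_ratio" suffix is just an illustrative choice):
ratios <- lapply(sample.df[ind], division)
names(ratios) <- paste0(names(ratios), "_ratio")
cbind(sample.df, ratios)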
BTW: rolling generally refers to a windowed or cumulative process along a vector. What you are looking for is applying one function to each column individually and capturing the result.
Here are a few ways:
1) diff
transform(sample.df, desired = c(NA, exp(diff(log(sample)))))
## sample desired
## 1 4 NA
## 2 5 1.2500000
## 3 6 1.2000000
## 4 7 1.1666667
## 5 8 1.1428571
## ... etc ...
To apply this to multiple columns, using the built-in data frame anscombe:
rbind(NA, exp(diff(log(as.matrix(anscombe)))))
2) diff.zoo
library(zoo)
z <- zoo(sample.df$sample)
merge(z, desired = diff(z, arith = FALSE), all = TRUE)
## z desired
## 1 4 NA
## 2 5 1.2500000
## 3 6 1.2000000
## 4 7 1.1666667
## 5 8 1.1428571
## ... etc ...
To apply it to all columns of anscombe:
z <- zoo(rbind(NA, anscombe))
diff(z, arith = FALSE)
3) dplyr
library(dplyr)
sample.df %>% mutate(desired = sample/lag(sample))
## sample desired
## 1 4 NA
## 2 5 1.2500000
## 3 6 1.2000000
## 4 7 1.1666667
## 5 8 1.1428571
## ... etc ...
To apply this to all columns of anscombe:
anscombe %>% mutate_all(funs(. / lag(.)))
4) rollapplyr
library(zoo)
transform(sample.df, desired = rollapplyr(sample, 2, function(x) x[2]/x[1], fill = NA))
## sample desired
## 1 4 NA
## 2 5 1.2500000
## 3 6 1.2000000
## 4 7 1.1666667
## 5 8 1.1428571
To apply it to all columns of anscombe:
rollapplyr(anscombe, 2, function(x) x[2]/x[1], fill = NA)
5) lag.ts
transform(sample.df, desired = c(NA, lag(ts(sample)) / ts(sample)))
## sample desired
## 1 4 NA
## 2 5 1.2500000
## 3 6 1.2000000
## 4 7 1.1666667
## 5 8 1.1428571
To apply it to all columns of anscombe use the following. Note that dplyr should NOT be loaded since it annoyingly clobbers lag with its own lag. Alternatively, use stats::lag:
lag(ts(anscombe)) / ts(anscombe)
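If you need that result padded back to the original number of rows and returned as a plain data frame, one hedged way (mirroring the rbind(NA, ...) trick from approach 1) is:
ratios <- stats::lag(ts(anscombe)) / ts(anscombe)  # one fewer row than anscombe
as.data.frame(rbind(NA, ratios))                   # pad so the rows line up again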

How to turn an rpart object into a dendrogram? (as.dendrogram.rpart?)

I would like a way to turn an rpart tree object into a nested list of lists (a dendrogram). Ideally, the attributes in each node will include the information in the rpart object (impurity, variable and rule that is used for splitting, the number of observations funneled to that node, etc.).
Looking at the rpart$frame object, it is not clear to me how to read it. Any suggestions?
Tiny example:
library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
fit$frame
var n wt dev yval complexity ncompete nsurrogate yval2.V1 yval2.V2 yval2.V3 yval2.V4 yval2.V5 yval2.nodeprob
1 Start 81 81 17 1 0.17647059 2 1 1.00000000 64.00000000 17.00000000 0.79012346 0.20987654 1.00000000
2 Start 62 62 6 1 0.01960784 2 2 1.00000000 56.00000000 6.00000000 0.90322581 0.09677419 0.76543210
4 <leaf> 29 29 0 1 0.01000000 0 0 1.00000000 29.00000000 0.00000000 1.00000000 0.00000000 0.35802469
5 Age 33 33 6 1 0.01960784 2 2 1.00000000 27.00000000 6.00000000 0.81818182 0.18181818 0.40740741
10 <leaf> 12 12 0 1 0.01000000 0 0 1.00000000 12.00000000 0.00000000 1.00000000 0.00000000 0.14814815
11 Age 21 21 6 1 0.01960784 2 0 1.00000000 15.00000000 6.00000000 0.71428571 0.28571429 0.25925926
22 <leaf> 14 14 2 1 0.01000000 0 0 1.00000000 12.00000000 2.00000000 0.85714286 0.14285714 0.17283951
23 <leaf> 7 7 3 2 0.01000000 0 0 2.00000000 3.00000000 4.00000000 0.42857143 0.57142857 0.08641975
3 <leaf> 19 19 8 2 0.01000000 0 0 2.00000000 8.00000000 11.00000000 0.42105263 0.57894737 0.23456790
(the function ggdendro:::dendro_data.rpart might be helpful somehow, but I couldn't get it to really solve the problem)
Here is a GitHub gist with the function rpart2dendro for converting an object of class "rpart" to a dendrogram. Note that branches are not weighted in the output object, but it should be fairly straightforward to recursively modify the "height" attributes of the dendrogram to get proportional branch lengths. The Kyphosis example is provided at the bottom.
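As for reading fit$frame directly: its row names are the node numbers, and for a non-leaf node k the children are nodes 2*k and 2*k + 1. Below is a minimal sketch (not the gist's rpart2dendro, just an illustration of that indexing) that walks the frame into a nested list of lists; the attributes kept per node are only a subset of what the frame holds.
library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
frame <- fit$frame

as_nested_list <- function(k) {
  row <- frame[as.character(k), ]
  node <- list(
    var  = as.character(row$var),  # "<leaf>" for terminal nodes
    n    = row$n,
    dev  = row$dev,
    yval = row$yval
  )
  if (row$var != "<leaf>") {       # internal node: recurse into both children
    node$children <- list(
      left  = as_nested_list(2 * k),
      right = as_nested_list(2 * k + 1)
    )
  }
  node
}

tree_list <- as_nested_list(1)     # node 1 is the root
str(tree_list, max.level = 3)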

Ada in R giving me single classification

I am using the function ada in R, and I'm having a little difficulty. I have training data that looks like this
V13 V15 V17 V19
1 0.017241379 0.471264368 0.01449275 0.24637681
2 0.255813953 0.011627907 0.06849315 0.05479452
3 0.040000000 0.400000000 0.06000000 0.10000000
4 0.500000000 0.000000000 0.05128205 0.00000000
5 0.102040816 0.367346939 0.05769231 0.19230769
6 0.561403509 0.105263158 0.11111111 0.00000000
7 0.300813008 0.048780488 0.12222222 0.03333333
8 0.000000000 0.714285714 0.14285714 0.07142857
9 0.328947368 0.013157895 0.01492537 0.00000000
10 0.536585366 0.060975610 0.16071429 0.03571429
11 0.338461538 0.030769231 0.11764706 0.03921569
12 0.033898305 0.322033898 0.11764706 0.21568627
This is what I have stored in the variable
matrix.x
Then I have the response variables y
y
1 1
2 -1
3 1
4 -1
5 1
6 -1
7 -1
8 1
9 -1
10 -1
11 -1
12 1
I simply run the following:
ada.obj = ada(matrix.x, y)
And then
ada.pred = predict(ada.obj, matrix.x)
And for some reason, I get a matrix with all 1s or all -1s. What am I doing wrong? Ideally, I want the ada.pred to spit out the exact classifications of the training data.
Thanks.
Also, how would I go about using the AdaBoost.M1 method in the caret package in R?
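For the caret part of the question, a hedged sketch of what the call might look like (this assumes the caret, adabag and plyr packages are installed, that "AdaBoost.M1" is the method name in caret's model list, and that matrix.x is the predictor data frame shown above):
library(caret)

# caret expects a factor response for classification
y <- factor(c(1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1))

set.seed(1)
fit <- train(x = matrix.x, y = y, method = "AdaBoost.M1",
             trControl = trainControl(method = "cv", number = 3))
predict(fit, newdata = matrix.x)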

Exclude zero values from a ggplot barplot?

Does anyone know if it is possible to exclude zero values from a barplot in ggplot?
I have a dataset that contains proportions as follows:
X5employf prop X5employff
1 increase 0.02272727
2 increase 0.59090909 1
3 increase 0.02272727 1 and 8
4 increase 0.02272727 2
5 increase 0.34090909 3
6 increase 0.00000000 4
7 increase 0.00000000 5
8 increase 0.00000000 6
9 increase 0.00000000 6 and 7
10 increase 0.00000000 6 and 7
11 increase 0.00000000 7
12 increase 0.00000000 8
13 decrease 0.00000000
14 decrease 0.00000000 1
15 decrease 0.00000000 1 and 8
16 decrease 0.00000000 2
17 decrease 0.00000000 3
18 decrease 0.10000000 4
19 decrease 0.50000000 5
20 decrease 0.20000000 6
21 decrease 0.00000000 6 and 7
22 decrease 0.00000000 6 and 7
23 decrease 0.10000000 7
24 decrease 0.10000000 8
25 same 0.00000000
26 same 0.00000000 1
27 same 0.00000000 1 and 8
28 same 0.00000000 2
29 same 0.00000000 3
30 same 0.21052632 4
31 same 0.31578947 5
32 same 0.26315789 6
33 same 0.15789474 6 and 7
34 same 0.00000000 6 and 7
35 same 0.05263158 7
36 same 0.00000000 8
As you can see, the 'prop' column contains a lot of zero values. I am producing a faceted bar plot with the 'X5employf' column as the facet, but because of the zero values I end up with a lot of empty space on the plot. Is there a way of forcing ggplot not to plot the zero values? It's not a case of dropping unused factor levels, as these are not NA values but 0s. Any ideas?
For your plot, simply use which to specify that you only want to use the subset of the dataframe containing non-zero proportions. This way you don't have to modify your original dataframe. Then, specify "free_x" in your scales argument within facet_grid to get rid of your empty space in your faceted plot.
plot <- ggplot(df[which(df$prop > 0), ], aes(X5employff, prop)) +
  geom_bar(aes(fill = X5employff), stat = "identity") +
  facet_grid(~ X5employf, scales = "free_x") +
  theme_bw()
plot
Note that I replaced the blank fields with "blank" for the sake of quick import into R from Excel.
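The same filtering can also be written with subset(), which some find easier to read; a small sketch using the question's column names (assumes ggplot2 is loaded and df is the data frame above):
library(ggplot2)

non_zero <- subset(df, prop > 0)   # keep only rows with a non-zero proportion
ggplot(non_zero, aes(X5employff, prop)) +
  geom_bar(aes(fill = X5employff), stat = "identity") +
  facet_grid(~ X5employf, scales = "free_x") +
  theme_bw()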
I'm unsure whether there is a way to have ggplot ignore particular values. However, you could consider simply recoding the 0s to NA:
df$prop[df$prop == 0] <- NA

How to create a table by restructuring a MALLET output file?

I'm using MALLET for topic analysis, which outputs results in text files ("topics.txt") of several thousand rows and a hundred or so columns, where each row consists of tab-separated values like this:
Num1 text1 topic1 proportion1 topic2 proportion2 topic3 proportion3, etc.
Num2 text2 topic1 proportion1 topic2 proportion2 topic3 proportion3, etc.
Num3 text3 topic1 proportion1 topic2 proportion2 topic3 proportion3, etc.
Here's a snippet of the actual data:
> dat[1:5,1:10]
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 0 10.txt 27 0.4560785 23 0.3040853 20 0.1315621 21 0.03632624
2 1 1001.txt 20 0.2660085 12 0.2099153 8 0.1699586 13 0.16922928
3 2 1002.txt 16 0.3341721 2 0.1747023 10 0.1360454 12 0.07507119
4 3 1003.txt 12 0.5366148 8 0.2255179 18 0.1388561 0 0.01867091
5 4 1005.txt 16 0.2363206 0 0.2214441 24 0.1914769 7 0.17760521
I'm trying to use R to convert this output into a data table where the topics are the column headers and each topic column contains the 'proportion' value that appears immediately to the right of that topic, for each value of 'text'. Like this:
topic1 topic2 topic3
text1 proportion1 proportion2 proportion3
text2 proportion1 proportion2 proportion3
or with the data snippet above, like so:
0 2 7 8 10 12 13 16 18 20 21 23 24 27
10.txt 0 0 0 0 0 0 0 0 0 0.1315621 0.03632624 0.3040853 0 0.4560785
1001.txt 0 0 0 0.1699586 0 0.2099153 0.1692292 0 0 0.2660085 0 0 0 0
1002.txt 0 0.1747023 0 0 0.1360454 0.0750711 0 0.3341721 0 0 0 0 0 0
1003.txt 0.0186709 0 0 0.2255179 0 0.5366148 0 0 0.138856 0 0 0 0 0
1005.txt 0.2214441 0 0.1776052 0 0 0 0 0.2363206 0 0 0 0 0.1914769 0
This is the R code I've got to do the job, sent from a friend, but it doesn't work for me (and I don't know enough about it to fix it myself):
##########################################
dat<-read.table("topics.txt", header=F, sep="\t")
datnames<-subset(dat, select=2)
dat2<-subset(dat, select=3:length(dat))
y <- data.frame(topic=character(0),proportion=character(0),text=character(0))
for(i in seq(1, length(dat2), 2)){
  z <- i + 1
  x <- dat2[, i:z]
  x <- cbind(x, datnames)
  colnames(x) <- c("topic", "proportion", "text")
  y <- rbind(y, x)
}
# Right at this step at the end of the block
# I get this message that may indicate the problem:
# Error in c("topic", "proportion", "text") : unused argument(s) ("text")
y[is.na(y)] <- 0
xdat<-xtabs(proportion ~ text+topic, data=y)
write.table(xdat, file="topicMatrix.txt", sep="\t", eol = "\n", quote=TRUE, col.names=TRUE, row.names=TRUE)
##########################################
I'd be most grateful for any suggestions on how I can get this code working. My problem may be related to this one and possibly this one also, but I don't yet have the skills to make immediate use of the answers to those questions.
Here is one approach to your problem
dat <-read.table(as.is = TRUE, header = FALSE, textConnection(
"Num1 text1 topic1 proportion1 topic2 proportion2 topic3 proportion3
Num2 text2 topic1 proportion1 topic2 proportion2 topic3 proportion3
Num3 text3 topic1 proportion1 topic2 proportion2 topic3 proportion3"))
NTOPICS = 3
nam <- c('num', 'text',
paste(c('topic', 'proportion'), rep(1:NTOPICS, each = 2), sep = ""))
dat_l <- reshape(setNames(dat, nam), varying = 3:length(nam), direction = 'long',
sep = "")
reshape2::dcast(dat_l, num + text ~ topic, value.var = 'proportion')
num text topic1 topic2 topic3
1 Num1 text1 proportion1 proportion2 proportion3
2 Num2 text2 proportion1 proportion2 proportion3
3 Num3 text3 proportion1 proportion2 proportion3
EDIT. This will work irrespective of whether the proportions are text or numbers. You can also modify NTOPICS to suit the number of topics you have
You can get this into a long format, but going further requires real data.
EDITED after the data was offered. Still not sure about the overall structure of what is coming from MALLET, but at least the R functions are demonstrated. This approach has the "feature" that proportions are summed if there are overlapping topics; depending on the data layout, that may or may not be an advantage.
dat <-read.table(textConnection(" V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 0 10.txt 27 0.4560785 23 0.3040853 20 0.1315621 21 0.03632624
2 1 1001.txt 20 0.2660085 12 0.2099153 8 0.1699586 13 0.16922928
3 2 1002.txt 16 0.3341721 2 0.1747023 10 0.1360454 12 0.07507119
4 3 1003.txt 12 0.5366148 8 0.2255179 18 0.1388561 0 0.01867091
5 4 1005.txt 16 0.2363206 0 0.2214441 24 0.1914769 7 0.17760521
"),
header=TRUE)
ldat <- reshape(dat, idvar=1:2, varying=list(topics=c("V3", "V5", "V7", "V9"),
props=c("V4", "V6", "V8", "V10")),
direction="long")
####------------------####
> ldat
V1 V2 time V3 V4
0.10.txt.1 0 10.txt 1 27 0.45607850
1.1001.txt.1 1 1001.txt 1 20 0.26600850
2.1002.txt.1 2 1002.txt 1 16 0.33417210
3.1003.txt.1 3 1003.txt 1 12 0.53661480
4.1005.txt.1 4 1005.txt 1 16 0.23632060
0.10.txt.2 0 10.txt 2 23 0.30408530
1.1001.txt.2 1 1001.txt 2 12 0.20991530
2.1002.txt.2 2 1002.txt 2 2 0.17470230
3.1003.txt.2 3 1003.txt 2 8 0.22551790
4.1005.txt.2 4 1005.txt 2 0 0.22144410
0.10.txt.3 0 10.txt 3 20 0.13156210
1.1001.txt.3 1 1001.txt 3 8 0.16995860
2.1002.txt.3 2 1002.txt 3 10 0.13604540
3.1003.txt.3 3 1003.txt 3 18 0.13885610
4.1005.txt.3 4 1005.txt 3 24 0.19147690
0.10.txt.4 0 10.txt 4 21 0.03632624
1.1001.txt.4 1 1001.txt 4 13 0.16922928
2.1002.txt.4 2 1002.txt 4 12 0.07507119
3.1003.txt.4 3 1003.txt 4 0 0.01867091
4.1005.txt.4 4 1005.txt 4 7 0.17760521
Now I can show you how to use xtabs(), since those "proportions" are numeric. Something like this may eventually be what you want. I was surprised that the topics are also integers, but perhaps there is a mapping from topic numbers to topic names:
> xtabs(V4 ~ V3 + V2, data=ldat)
V2
V3 10.txt 1001.txt 1002.txt 1003.txt 1005.txt
0 0.00000000 0.00000000 0.00000000 0.01867091 0.22144410
2 0.00000000 0.00000000 0.17470230 0.00000000 0.00000000
7 0.00000000 0.00000000 0.00000000 0.00000000 0.17760521
8 0.00000000 0.16995860 0.00000000 0.22551790 0.00000000
10 0.00000000 0.00000000 0.13604540 0.00000000 0.00000000
12 0.00000000 0.20991530 0.07507119 0.53661480 0.00000000
13 0.00000000 0.16922928 0.00000000 0.00000000 0.00000000
16 0.00000000 0.00000000 0.33417210 0.00000000 0.23632060
18 0.00000000 0.00000000 0.00000000 0.13885610 0.00000000
20 0.13156210 0.26600850 0.00000000 0.00000000 0.00000000
21 0.03632624 0.00000000 0.00000000 0.00000000 0.00000000
23 0.30408530 0.00000000 0.00000000 0.00000000 0.00000000
24 0.00000000 0.00000000 0.00000000 0.00000000 0.19147690
27 0.45607850 0.00000000 0.00000000 0.00000000 0.00000000
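If you want the layout sketched in the question (documents as rows, topics as columns), one way is to transpose the xtabs result and coerce it to a data frame; a small sketch, reusing ldat from above:
wide <- as.data.frame.matrix(t(xtabs(V4 ~ V3 + V2, data = ldat)))
wide   # one row per document, one column per topic number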
Returning to this problem, I've found that the reshape function is far too demanding on memory, so I use a data.table method instead. A few more steps, but a huge amount faster and substantially less memory intensive.
dat <- read.table(text = "V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 0 10.txt 27 0.4560785 23 0.3040853 20 0.1315621 21 0.03632624
2 1 1001.txt 20 0.2660085 12 0.2099153 8 0.1699586 13 0.16922928
3 2 1002.txt 16 0.3341721 2 0.1747023 10 0.1360454 12 0.07507119
4 3 1003.txt 12 0.5366148 8 0.2255179 18 0.1388561 0 0.01867091
5 4 1005.txt 16 0.2363206 0 0.2214441 24 0.1914769 7 0.17760521")
dat$V11 <- rep(NA, 5) # my real data has this extra unwanted col
library(data.table)
dat <- data.table(dat)
# get document number
docnum <- dat$V1
# get text number
txt <- dat$V2
# remove doc num and text num so we just have topic and props
dat1 <- dat[ ,c("V1","V2", paste0("V", ncol(dat))) := NULL]
# get topic numbers
n <- ncol(dat1)
tops <- apply(dat1, 1, function(i) i[seq(1, n, 2)])
# get props
props <- apply(dat1, 1, function(i) i[seq(2, n, 2)])
# put topics and props together
tp <- lapply(1:ncol(tops), function(i) data.frame(tops[,i], props[,i]))
names(tp) <- txt
# make into long table
dt <- data.table::rbindlist(tp)
dt$doc <- unlist(lapply(txt, function(i) rep(i, ncol(dat1)/2)))
dt$docnum <- unlist(lapply(docnum, function(i) rep(i, ncol(dat1)/2)))
# reshape to wide
setkey(dt, tops...i., doc)
out <- dt[CJ(unique(tops...i.), unique(doc))][, as.list(props...i.), by=tops...i.]
setnames(out, c("topic", as.character(txt)))
# transpose to have table of docs (rows) and columns (topics)
tout <- data.table(t(out))
setnames(tout, unname(as.character(tout[1,])))
tout <- tout[-1,]
row.names(tout) <- txt
# replace NA with zero
tout[is.na(tout)] <- 0
And here's the output: docs as rows, topics as columns. The doc names are in the row names, which are not printed but are available for later use.
tout
0 2 7 8 10 12 13 16 18
1: 0.00000000 0.0000000 0.0000000 0.0000000 0.0000000 0.00000000 0.0000000 0.0000000 0.0000000
2: 0.00000000 0.0000000 0.0000000 0.1699586 0.0000000 0.20991530 0.1692293 0.0000000 0.0000000
3: 0.00000000 0.1747023 0.0000000 0.0000000 0.1360454 0.07507119 0.0000000 0.3341721 0.0000000
4: 0.01867091 0.0000000 0.0000000 0.2255179 0.0000000 0.53661480 0.0000000 0.0000000 0.1388561
5: 0.22144410 0.0000000 0.1776052 0.0000000 0.0000000 0.00000000 0.0000000 0.2363206 0.0000000
20 21 23 24 27
1: 0.1315621 0.03632624 0.3040853 0.0000000 0.4560785
2: 0.2660085 0.00000000 0.0000000 0.0000000 0.0000000
3: 0.0000000 0.00000000 0.0000000 0.0000000 0.0000000
4: 0.0000000 0.00000000 0.0000000 0.0000000 0.0000000
5: 0.0000000 0.00000000 0.0000000 0.1914769 0.0000000
