Get difference between grouped values in a tall dataset in R

I have a data set set up like the example below:
Name df Value
A 1 .5
A 2 2
A 3 3
B 1 1
B 2 .5
I would like to get the difference between the values until the Name column changes; then it should stop and start computing the new differences, like below:
Name df Value Diff
A 1 .5 NA
A 2 2 1.5
A 3 3 2.5
B 1 1 NA
B 2 .5 -.5
Is there any way I can do this? I have tried reshaping the data set into wide format, but I cannot figure out a way to make that work either.

An option would be to do a grouped diff:
library(dplyr)
df1 %>%
  group_by(Name) %>%
  mutate(Diff = c(NA, cumsum(diff(Value))))
# A tibble: 5 x 4
# Groups: Name [2]
# Name df Value Diff
# <chr> <int> <dbl> <dbl>
#1 A 1 0.5 NA
#2 A 2 2 1.5
#3 A 3 3 2.5
#4 B 1 1 NA
#5 B 2 0.5 -0.5
data
df1 <- structure(list(Name = c("A", "A", "A", "B", "B"), df = c(1L,
2L, 3L, 1L, 2L), Value = c(0.5, 2, 3, 1, 0.5)),
class = "data.frame", row.names = c(NA,
-5L))

@akrun's answer is the way to go, but just as a riddle, this works too:
df1 %>%
  group_by(Name) %>%
  mutate(Diff = cumsum(Value - lag(Value, default = Value[1])))
# # A tibble: 5 x 4
# # Groups: Name [2]
# Name df Value Diff
# <chr> <int> <dbl> <dbl>
# 1 A 1 0.5 0
# 2 A 2 2 1.5
# 3 A 3 3 2.5
# 4 B 1 1 0
# 5 B 2 0.5 -0.5
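For completeness, the same grouped cumulative difference (each Value minus the group's first Value) can be sketched in base R with ave(), using the df1 defined below:
df1$Diff <- ave(df1$Value, df1$Name,
                FUN = function(v) c(NA, cumsum(diff(v))))
df1
#   Name df Value Diff
# 1    A  1   0.5   NA
# 2    A  2   2.0  1.5
# 3    A  3   3.0  2.5
# 4    B  1   1.0   NA
# 5    B  2   0.5 -0.5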

rowMeans() grouping by variable [duplicate]

This is probably trivial, but my data looks like this:
t <- structure(list(var = 1:5, ID = c(1, 2, 1, 1, 3)), class = "data.frame", row.names = c(NA,
-5L))
> t
var ID
1 1 1
2 2 2
3 3 1
4 4 1
5 5 3
I would like to get a mean value for each ID, so my idea was to transform them into this (variable names are not important):
f <- structure(list(ID = 1:3, var.1 = c(1, 2, 5), var.2 = c(2, NA,
NA), var.3 = c(3, NA, NA)), class = "data.frame", row.names = c(NA,
-3L))
> f
ID var.1 var.2 var.3
1 1 1 2 3
2 2 2 NA NA
3 3 5 NA NA
so that I could then calculate the mean for each var.x.
I know it's possible with tidyr (possibly pivot_wider?), but I can't figure out how to group it. How do I get a mean value for each ID?
Thank you in advance
You could use ave to get the mean of var for each ID:
t$mean = ave(t$var, t$ID, FUN = mean)
Result:
var ID mean
1 1 1 2.666667
2 2 2 2.000000
3 3 1 2.666667
4 4 1 2.666667
5 5 3 5.000000
If you want a simple table with the means, you could use aggregate:
aggregate(formula = var~ID, data = t, FUN = mean)
ID var
1 1 2.666667
2 2 2.000000
3 3 5.000000
If you want to use rowMeans on your t data frame, we can first use pivot_wider, then take the mean of each row.
library(tidyverse)
t %>%
  group_by(ID) %>%
  mutate(row = row_number()) %>%
  ungroup() %>%
  pivot_wider(names_from = row, values_from = var, names_prefix = "var.") %>%
  mutate(mean = rowMeans(select(., starts_with("var")), na.rm = TRUE))
# ID var.1 var.2 var.3 mean
# <dbl> <int> <int> <int> <dbl>
# 1 1 1 3 4 2.67
# 2 2 2 NA NA 2
# 3 3 5 NA NA 5
Or, since t is already in long form, we can simply group by ID and take the mean of all values in each group.
t %>%
  group_by(ID) %>%
  summarise(mean = mean(var))
# ID mean
# <dbl> <dbl>
#1 1 2.67
#2 2 2
#3 3 5
Or, for f, we can use rowMeans on each row, including every column whose name starts with var.
f %>%
  mutate(mean = rowMeans(select(., starts_with("var")), na.rm = TRUE))
# ID var.1 var.2 var.3 mean
#1 1 1 2 3 2
#2 2 2 NA NA 2
#3 3 5 NA NA 5
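And for a quick base R cross-check on the original long-format t, tapply() returns the same per-ID means as a named vector:
tapply(t$var, t$ID, mean)
#        1        2        3
# 2.666667 2.000000 5.000000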

R replace missing values if all are missing within a group

I would like to replace missing values with the values from another column if all values within a group are missing. Here is an example and something I thought would work. There can be an unlimited number of groups.
library(tidyverse)
df <- tibble(ID = c("A", "A", "A", "B", "B", "B"),
             val1 = c(1, 2, 3, 4, 5, 6),
             val2 = c(NA, NA, NA, NA, 2, 3))
df %>%
  group_by(ID) %>%
  mutate(val2 = ifelse(all(is.na(val2)), val1, val2))
# Groups: ID [2]
ID val1 val2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 1
3 A 3 1
4 B 4 NA
5 B 5 NA
6 B 6 NA
What I would like: val2 should take its values from val1 if all val2 values within the group are missing. Right now it seems to be giving me just the first value. Nothing should happen if only some values are missing.
Result:
# A tibble: 6 x 3
ID val1 val2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 2
3 A 3 3
4 B 4 NA
5 B 5 2
6 B 6 3
Does this work?
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(val2 = case_when(all(is.na(val2)) ~ val1, TRUE ~ val2))
# A tibble: 6 x 3
# Groups: ID [2]
ID val1 val2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 2
3 A 3 3
4 B 4 NA
5 B 5 2
6 B 6 3
You almost had it. I create an indicator which is used to replace the values:
df %>%
  group_by(ID) %>%
  mutate(val3 = ifelse(all(is.na(val2)), 1, 0)) %>%
  ungroup() %>%
  mutate(val2 = ifelse(val3 == 1, val1, val2)) %>%
  select(-val3)
Output:
# A tibble: 6 x 3
ID val1 val2
<chr> <dbl> <dbl>
1 A 1 1
2 A 2 2
3 A 3 3
4 B 4 NA
5 B 5 2
6 B 6 3
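As a side note on why the original ifelse() attempt repeated the first value: ifelse() returns a result the same length as its test, and all(is.na(val2)) has length 1, so a single val1 value is returned and recycled across the group. A plain if/else inside the grouped mutate() returns the full vector and avoids this; a minimal sketch:
df %>%
  group_by(ID) %>%
  mutate(val2 = if (all(is.na(val2))) val1 else val2) %>%
  ungroup()
This gives the same result as the case_when() answer above.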

R - Count unique/distinct values in two columns together per group

Hi everyone. I have a panel of electoral behaviour, but I am having problems computing a new variable that would capture the unique values (parties) of my two columns Party and Party2013 per group. The column Party2013 measures the vote in the 2013 election and Party measures voters' intentions after 2013. Every time I try n_distinct or length I get the count of unique values in each column separately, but not combined.
ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
Based on the example above I normally get a count of 3 instead of the desired 2.
I've tried the following commands but got only the number of separate unique values:
data %>% group_by(ID) %>% distinct(Party, Party2013, .keep_all = TRUE) %>% dplyr::summarise(Party_Party2013 = n())
or
ddply(data, .(ID), mutate, count = length(unique(Party, Party2013)))
The expected outcome would be as follows:
ID Wave Party Party2013 Count
1 1 A A 2
1 2 A NA 2
1 3 B NA 2
1 4 B NA 2
2 1 A C 3
2 2 B NA 3
2 3 B NA 3
2 4 B NA 3
I would very much appreciate any advice on how to count the overall number of unique parties across the two columns per group, and not the number of distinct values in each column separately. Thanks.
You can subset the data from cur_data() and unlist it to get a vector, then use n_distinct to count the number of unique values.
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(Count = n_distinct(unlist(select(cur_data(), Party, Party2013)),
                            na.rm = TRUE)) %>%
  ungroup()
# ID Wave Party Party2013 Count
# <int> <int> <chr> <chr> <int>
#1 1 1 A A 2
#2 1 2 A NA 2
#3 1 3 B NA 2
#4 1 4 B NA 2
#5 2 1 A C 3
#6 2 2 B NA 3
#7 2 3 B NA 3
#8 2 4 B NA 3
data
It is easier to help if you provide data in a reproducible format:
df <- structure(list(ID = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L), Wave = c(1L,
2L, 3L, 4L, 1L, 2L, 3L, 4L), Party = c("A", "A", "B", "B", "A",
"B", "B", "B"), Party2013 = c("A", NA, NA, NA, "C", NA, NA, NA
)), class = "data.frame", row.names = c(NA, -8L))
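As a side note, cur_data() was deprecated in dplyr 1.1.0 in favour of pick(); on newer versions the same idea would be written roughly as (a sketch, assuming dplyr >= 1.1.0):
df %>%
  group_by(ID) %>%
  mutate(Count = n_distinct(unlist(pick(Party, Party2013)), na.rm = TRUE)) %>%
  ungroup()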
In situations like this I always like to simplify the problem by changing the data into long format, since problems like this are easier to solve when all of your values are in one column. With pivot_longer() you can also use the argument values_drop_na = TRUE to drop the NAs that were being counted in your example:
library(tidyr)
library(dplyr)
data <- read.table(text =
"ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
2 1 A C
2 2 B NA
2 3 B NA
2 4 B NA", header = TRUE)
data %>%
  pivot_longer(cols = starts_with("Party"), values_drop_na = TRUE) %>%
  group_by(ID) %>%
  summarise(Count = n_distinct(value)) %>%
  merge(data, .)
#> ID Wave Party Party2013 Count
#> 1 1 1 A A 2
#> 2 1 2 A <NA> 2
#> 3 1 3 B <NA> 2
#> 4 1 4 B <NA> 2
#> 5 2 1 A C 3
#> 6 2 2 B <NA> 3
#> 7 2 3 B <NA> 3
#> 8 2 4 B <NA> 3
Created on 2021-08-30 by the reprex package (v2.0.1)
You can also do it this way; note that the NAs have to be dropped before counting, otherwise they inflate the count:
library(dplyr)
data <- read.table(text =
"ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
2 1 A C
2 2 B NA
2 3 B NA
2 4 B NA", header = TRUE)
data %>%
  group_by(ID) %>%
  mutate(Count = c(Party, Party2013) %>%
           na.omit() %>%
           unique() %>%
           length())
output
# A tibble: 8 x 5
# Groups: ID [2]
ID Wave Party Party2013 Count
<int> <int> <chr> <chr> <int>
1 1 1 A A 2
2 1 2 A NA 2
3 1 3 B NA 2
4 1 4 B NA 2
5 2 1 A C 3
6 2 2 B NA 3
7 2 3 B NA 3
8 2 4 B NA 3

How to conditionally count and record if a sample appears in rows of another dataset?

I have a genetic dataset of IDs (dataset1) and a dataset of IDs which interact with each other (dataset2). I am trying to count the IDs in dataset1 that appear in either of the two interaction columns of dataset2, and also to record the interacting/matching IDs in a third column.
Dataset1:
ID
1
2
3
Dataset2:
Interactor1 Interactor2
1 5
2 3
1 10
Output:
ID InteractionCount Interactors
1 2 5, 10
2 1 3
3 1 2
So the output contains all IDs of dataset1, a count of how many times each appears in either column 1 or 2 of dataset2, and, for each ID that does appear, the IDs in dataset2 it interacts with.
I have a biology background, so I have been guessing at how to approach this. So far I've managed to use merge() and setDT(mergeddata)[, .N, by=ID] to try to count the dataset1 IDs which appear in dataset2, but I'm not sure this is the right approach for also creating the column storing the interacting IDs. Any help on possible functions which can store matched IDs in a third column would be appreciated.
Input data:
dput(dataset1)
structure(list(ID = 1:3), row.names = c(NA, -3L), class = c("data.table",
"data.frame"))
dput(dataset2)
structure(list(Interactor1 = c(1L, 2L, 1L), Interactor2 = c(5L,
3L, 10L)), row.names = c(NA, -3L), class = c("data.table", "data.frame"
))
Here is an option using data.table:
x <- names(DT2)
cols <- c("InteractionCount", "Interactors")
# ensure that the pairs are ordered for each row and there are no duplicated pairs
DT2 <- setkeyv(unique(DT2[, (x) := .(pmin(i1, i2), pmax(i1, i2))]), x)
# for each ID find the neighbours linked to it
neighbours <- rbindlist(list(DT2[, .(.N, toString(i2)), i1],
                             DT2[, .(.N, toString(i1)), i2]),
                        use.names = FALSE)
setnames(neighbours, names(neighbours), c("ID", cols))
# update DT1 using the above data
DT1[, (cols) := neighbours[DT1, on = .(ID), mget(cols)]]
output for DT1:
ID InteractionCount Interactors
1: 1 2 5, 10
2: 2 1 3
3: 3 1 2
data:
library(data.table)
DT1 <- structure(list(ID = 1:3), row.names = c(NA, -3L), class = c("data.table", "data.frame"))
DT2 <- structure(list(i1 = c(1L, 2L, 1L), i2 = c(5L, 3L, 10L)), row.names = c(NA, -3L), class = c("data.table", "data.frame"))
Another data.table answer.
library(data.table)
d1 <- data.table(ID=1:3)
d2 <- data.table(I1=c(1,2,1),I2=c(5,3,10))
# first stack I1 on I2 and vice versa
Output <- d2[,.(ID=c(I1,I2),x=c(I2,I1))]
Output
# ID x
# 1: 1 5
# 2: 1 10
# 3: 2 3
# 4: 5 1
# 5: 10 1
# 6: 3 2
# then collect the desired columns
Output <- Output[ID %in% unlist(d1[(ID)])][
,.(InteractionCount=.N,
Interactors = list(x)),
by=ID]
Output
# ID InteractionCount Interactors
# 1: 1 2 5,10
# 2: 2 1 3
# 3: 3 1 2
EDIT:
If the IDs are not numeric, you can set a key on d1:
library(data.table)
d1 <- data.table(ID=c("1","2","3A"))
setkey(d1,ID)
d2 <- data.table(I1=c("1","2","1"),I2=c("5","3A","10"))
Output <- d2[,.(ID=c(I1,I2),x=c(I2,I1))]
Output
# ID x
# 1: 1 5
# 2: 1 10
# 3: 2 3A
# 4: 5 1
# 5: 10 1
# 6: 3A 2
Output <- Output[ID %in% unlist(d1[(ID)])][
,.(InteractionCount=.N,
Interactors = list(x)),
by=ID]
Output
# ID InteractionCount Interactors
# 1: 1 2 5,10
# 2: 2 1 3A
# 3: 3A 1 2
Here's a solution based on the tidyverse package.
library(tidyverse)
d1 <- tibble(ID=1:3)
d2 <- tibble(Interactor1=c(1, 2, 1), Interactor2=c(5, 3, 10))
I think some of your difficulty is caused by the fact that your data is not tidy. You can read about what this means on the tidyverse homepage. Let's make d2 tidy:
d2narrow <- d2 %>% gather(key="Where", value="ID", Interactor1, Interactor2)
d2narrow
which gives:
# A tibble: 6 x 2
Where ID
<chr> <dbl>
1 Interactor1 1
2 Interactor1 2
3 Interactor1 1
4 Interactor2 5
5 Interactor2 3
6 Interactor2 10
Now getting the InteractionCounts is easy:
counts <- d2narrow %>% group_by(ID) %>% summarise(InteractionCount=n())
counts
# A tibble: 5 x 2
ID InteractionCount
<dbl> <int>
1 1 2
2 2 1
3 3 1
4 5 1
5 10 1
We can get a list of Interactor2s for each value of Interactor1 by going back to the original d2...
interactors1 <- d2 %>%
group_by(Interactor1) %>%
summarise(With1=list(unique(Interactor2))) %>%
rename(ID=Interactor1)
interactors1
# A tibble: 2 x 2
ID With1
<dbl> <list>
1 1 <dbl [2]>
2 2 <dbl [1]>
If an ID can appear in both Interactor1 and Interactor2, things get a little more fiddly. (That doesn't happen in your example, but just in case...)
interactors2 <- d2 %>%
  group_by(Interactor2) %>%
  summarise(With2 = list(unique(Interactor1))) %>%
  rename(ID = Interactor2)
interactors <- interactors1 %>%
full_join(interactors2, by="ID") %>%
unnest(cols=c(With1, With2)) %>%
mutate(With=ifelse(is.na(With1), With2, With1)) %>%
select(-With1, -With2)
interactors <- interactors %>%
group_by(ID) %>%
summarise(Interactors=list(unique(With)))
Now you can bring everything together, and make sure you get the data only for the IDs you want:
interactors <- d1 %>% left_join(counts, by="ID") %>% left_join(interactors, by="ID")
interactors
# A tibble: 3 x 3
ID InteractionCount Interactors
<dbl> <int> <list>
1 1 2 <dbl [2]>
2 2 1 <dbl [1]>
3 3 1 <dbl [1]>
That's the data in the format you requested (one column with a list of interactors for each ID). Just to prove it:
interactors$Interactors[1]
[[1]]
[1] 5 10
But I think you might find it easier to do more with the answer if it's in tidy form:
interactors %>% unnest(cols=c(Interactors))
# A tibble: 4 x 3
ID InteractionCount Interactors
<dbl> <int> <dbl>
1 1 2 5
2 1 2 10
3 2 1 3
4 3 1 2
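As a footnote, gather() is superseded by pivot_longer() in current tidyr; a sketch of the same stacking idea with the newer API (reusing d1 and d2 from above, and collapsing the interactors to a string for display):
library(dplyr)
library(tidyr)
d2 %>%
  mutate(pair = row_number()) %>%
  pivot_longer(starts_with("Interactor"), values_to = "ID") %>%
  group_by(pair) %>%
  mutate(partner = rev(ID)) %>% # the other member of each pair
  ungroup() %>%
  semi_join(d1, by = "ID") %>% # keep only IDs present in d1
  group_by(ID) %>%
  summarise(InteractionCount = n(), Interactors = toString(partner))
# A tibble: 3 x 3
#      ID InteractionCount Interactors
#   <dbl>            <int> <chr>
# 1     1                2 5, 10
# 2     2                1 3
# 3     3                1 2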

Summing values in R based on column value with dplyr

I have a data set that has the following information:
Subject Value1 Value2 Value3 UniqueNumber
001 1 0 1 3
002 0 1 1 2
003 1 1 1 1
If the value of UniqueNumber > 0, I would like to sum the values with dplyr for each subject from rows 1 through UniqueNumber and calculate the mean. So for Subject 001, sum = 2 and mean = .67.
total <- 0
average <- 0
for (i in 1:length(Data$Subject)) {
  for (j in 1:ncol(Data)) {
    if (Data$UniqueNumber[i] > 0) {
      total[i] <- sum(Data[i, 1:j])
      average[i] <- mean(Data[i, 1:j])
    }
  }
}
Edit: I am only looking to sum through the number of columns listed in the 'UniqueNumber' column. So this is looping through every row and stopping at the column listed in 'UniqueNumber'.
Example: Row 2 with Subject 002 should sum up the values in columns 'Value1' and 'Value2', while Row 3 with Subject 003 should only sum the value in column 'Value1'.
Not a tidyverse fan/expert, but I would try this using long format. Then, just filter by row index per group and then run any functions you want on a single column (much easier this way).
library(tidyr)
library(dplyr)
Data %>%
  gather(variable, value, -Subject, -UniqueNumber) %>% # long format
  group_by(Subject) %>% # group by Subject in order to get row counts
  filter(row_number() <= UniqueNumber) %>% # filter by row index
  summarise(Mean = mean(value), Total = sum(value)) %>% # do the calculations
  ungroup()
## A tibble: 3 x 3
# Subject Mean Total
# <int> <dbl> <int>
# 1 1 0.667 2
# 2 2 0.5 1
# 3 3 1 1
A very similar way to achieve this is to filter by the integers in the column names. The filter step comes before the group_by, so it could potentially improve performance (or not?), but it is less robust, as I'm assuming that the columns of interest are called "Value#":
Data %>%
  gather(variable, value, -Subject, -UniqueNumber) %>% # long format
  filter(as.numeric(gsub("Value", "", variable, fixed = TRUE)) <= UniqueNumber) %>% # filter
  group_by(Subject) %>% # group by Subject
  summarise(Mean = mean(value), Total = sum(value)) %>% # do the calculations
  ungroup()
## A tibble: 3 x 3
# Subject Mean Total
# <int> <dbl> <int>
# 1 1 0.667 2
# 2 2 0.5 1
# 3 3 1 1
Just for fun, adding a data.table solution
library(data.table)
data.table(Data) %>%
  melt(id = c("Subject", "UniqueNumber")) %>%
  .[as.numeric(gsub("Value", "", variable, fixed = TRUE)) <= UniqueNumber,
    .(Mean = round(mean(value), 3), Total = sum(value)),
    by = Subject]
# Subject Mean Total
# 1: 1 0.667 2
# 2: 2 0.500 1
# 3: 3 1.000 1
Here is another method that uses tidyr::nest to collect the Values columns into a list so that we can iterate through the table with map2. In each row, we select the correct values from the Values list-col and take the sum or mean respectively.
library(tidyverse)
tbl <- read_table2(
"Subject Value1 Value2 Value3 UniqueNumber
001 1 0 1 3
002 0 1 1 2
003 1 1 1 1"
)
tbl %>%
  filter(UniqueNumber > 0) %>%
  nest(starts_with("Value"), .key = "Values") %>%
  mutate(
    sum = map2_dbl(UniqueNumber, Values, ~ sum(.y[1:.x], na.rm = TRUE)),
    mean = map2_dbl(UniqueNumber, Values, ~ mean(as.numeric(.y[1:.x]), na.rm = TRUE))
  )
#> # A tibble: 3 x 5
#> Subject UniqueNumber Values sum mean
#> <chr> <dbl> <list> <dbl> <dbl>
#> 1 001 3 <tibble [1 × 3]> 2 0.667
#> 2 002 2 <tibble [1 × 3]> 1 0.5
#> 3 003 1 <tibble [1 × 3]> 1 1
Created on 2019-02-14 by the reprex package (v0.2.1)
Check this solution:
df %>%
  gather(key, val, Value1:Value3) %>%
  group_by(Subject) %>%
  mutate(
    Sum = sum(val[1:UniqueNumber[1]]),
    Mean = mean(val[1:UniqueNumber[1]])
  ) %>%
  spread(key, val)
Output:
Subject UniqueNumber Sum Mean Value1 Value2 Value3
<chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 001 3 2 0.667 1 0 1
2 002 2 1 0.5 0 1 1
3 003 1 1 1 1 1 1
OP might be interested only in a dplyr solution, but for comparison purposes and for future readers, here is a base R option using mapply:
cols <- grep("^Value", names(df))
cbind(df, t(mapply(function(x, y) {
  if (y > 0) {
    vals <- as.numeric(df[x, cols[1:y]])
    c(Sum = sum(vals, na.rm = TRUE), Mean = mean(vals, na.rm = TRUE))
  } else {
    c(Sum = 0, Mean = 0)
  }
}, 1:nrow(df), df$UniqueNumber)))
# Subject Value1 Value2 Value3 UniqueNumber Sum Mean
#1 1 1 0 1 3 2 0.667
#2 2 0 1 1 2 1 0.500
#3 3 1 1 1 1 1 1.000
Here we subset each row based on its respective UniqueNumber and then calculate its sum and mean if the UniqueNumber value is greater than 0; otherwise we return 0.
A solution that uses purrr::map_df() (from the same authors as dplyr).
library(dplyr)
library(purrr)
l_dat <- split(dat, dat$Subject) # first we need to split the data into a list
map_df(l_dat, function(x) {
  n_cols <- x$UniqueNumber           # finds the number of columns
  x <- as.numeric(x[2:(n_cols + 1)]) # subsets x and converts to numeric
  mean(x, na.rm = TRUE)              # mean to be returned
})
# output:
# # A tibble: 1 x 3
# `1` `2` `3`
# <dbl> <dbl> <dbl>
# 1 0.667 0.5 1
Another option (output format closer to a dplyr solution):
map_df(l_dat, function(x) {
  n_cols <- x$UniqueNumber
  id <- x$Subject
  x <- as.numeric(x[2:(n_cols + 1)])
  tibble(id = id, mean_values = mean(x, na.rm = TRUE))
})
# # A tibble: 3 x 2
# id mean_values
# <int> <dbl>
# 1 1 0.667
# 2 2 0.5
# 3 3 1
Just as an example, I added a sum() and then divided by length(x) - 1:
map_df(l_dat, function(x) {
  n_cols <- x$UniqueNumber
  id <- x$Subject
  x <- as.numeric(x[2:(n_cols + 1)])
  tibble(id = id,
         mean_values = sum(x, na.rm = TRUE) / (length(x) - 1)) # change here
})
# # A tibble: 3 x 2
# id mean_values
# <int> <dbl>
# 1 1 1.
# 2 2 1.
# 3 3 Inf #beware of this case where you end up dividing by 0
Data:
tt <- "Subject Value1 Value2 Value3 UniqueNumber
001 1 0 1 3
002 0 1 1 2
003 1 1 1 1"
dat <- read.table(text=tt, header=T)
I think the easiest way is to set to NA the zeros that really should be NA, then use rowSums and rowMeans on the appropriate subset of columns.
Data[2:4][col(Data[2:4]) > Data[[5]]] <- NA
Data
# Subject Value1 Value2 Value3 UniqueNumber
# 1 1 1 0 1 3
# 2 2 0 1 NA 2
# 3 3 1 NA NA 1
library(dplyr)
Data %>%
  mutate(sum = rowSums(.[2:4], na.rm = TRUE),
         mean = rowMeans(.[2:4], na.rm = TRUE))
# Subject Value1 Value2 Value3 UniqueNumber sum mean
# 1 1 1 0 1 3 2 0.6666667
# 2 2 0 1 NA 2 1 0.5000000
# 3 3 1 NA NA 1 1 1.0000000
or transform(Data, sum = rowSums(Data[2:4], na.rm = TRUE), mean = rowMeans(Data[2:4], na.rm = TRUE)) to stay in base R.
data
Data <- structure(
  list(Subject = 1:3,
       Value1 = c(1L, 0L, 1L),
       Value2 = c(0L, 1L, 1L),
       Value3 = c(1L, 1L, 1L),
       UniqueNumber = c(3L, 2L, 1L)),
  .Names = c("Subject", "Value1", "Value2", "Value3", "UniqueNumber"),
  row.names = c(NA, 3L), class = "data.frame")
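Finally, with dplyr 1.0 or later the per-row subsetting can also be written with rowwise() and c_across(). A minimal sketch, assuming the original question data (it gives the same result whether or not the trailing values have already been set to NA, thanks to na.rm = TRUE):
library(dplyr)
Data <- tibble(Subject = 1:3,
               Value1 = c(1, 0, 1),
               Value2 = c(0, 1, 1),
               Value3 = c(1, 1, 1),
               UniqueNumber = c(3, 2, 1))
Data %>%
  rowwise() %>%
  mutate(sum = sum(c_across(Value1:Value3)[seq_len(UniqueNumber)], na.rm = TRUE),
         mean = mean(c_across(Value1:Value3)[seq_len(UniqueNumber)], na.rm = TRUE)) %>%
  ungroup()
# A tibble: 3 x 7
#   Subject Value1 Value2 Value3 UniqueNumber   sum  mean
#     <int>  <dbl>  <dbl>  <dbl>        <dbl> <dbl> <dbl>
# 1       1      1      0      1            3     2 0.667
# 2       2      0      1      1            2     1 0.5
# 3       3      1      1      1            1     1 1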
