Merging columns and removing NA in R

I have a data frame:
A <- c(NA, 1, 2, NA, 3, NA)
R <- c(2, 1, 2, 1, NA, 1)
C <- c(rep("B", 3), rep("D", 3))
data1 <- data.frame(A, R, C)
data1
And I want to merge columns A and R to get a data frame like data2:
AR <- c(2, 1, 2, 1, 3, 1)
C <- c(rep("B", 3), rep("D", 3))
data2 <- data.frame(AR, C)
data2
Do you know how I can do that?

You might want to consider what happens if "A" and "R" have different values, but this should work:
data2 <- with(data1, data.frame(AR=ifelse(is.na(A), R, A), C=C))
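If dplyr is already in play, coalesce() expresses the same rule (take A where it is non-missing, otherwise R); a minimal sketch, not part of the original answer, assuming data1 as defined above:
library(dplyr)
data1$AR <- coalesce(data1$A, data1$R)  # first non-NA of A, then R
data2 <- data1[, c("AR", "C")]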

Related

Count co-occurrences between elements in one column based on second column, and count only if unequal in third column

I want to count how often each pairwise combination of unique elements in column c of data frame df co-occurs on the elements of column a, but with the addition that co-occurrences are only counted if the respective values in column b are unequal, i.e., conditional on a non-match in column b:
a <- c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,4,4,4,4,4)
b <- c(1,1,2,2,2,1,1,2,2,3,3,3,3,1,1,1,2,2,2,4)
c <- c(1,2,1,2,3,2,3,1,2,1,1,2,3,1,2,1,1,2,4,1)
df <- as.data.frame(cbind(a,b,c))
Without considering column b, I could do the following to get, for each pair of elements of column c, the number of elements of a on which they co-occur:
df <- unique(df[,c(1,3)])
df <- merge(df, df, by = "a")
df$count <- 1
df <- aggregate(count ~ ., df[, c(2:4)], sum)
df <- df[df$c.x != df$c.y,]
With the additional condition of a non-match in b, there is only one difference: elements 2 and 4 of column c both co-occur on element 4 of column a, but they have the same value in b and should therefore not be counted, so I end up with:
c.x <- c(2,3,4,1,3,1,2,1)
c.y <- c(1,1,1,2,2,3,3,4)
count <- c(4,3,1,4,3,3,3,1)
result <- as.data.frame(cbind(c.x,c.y,count))
As the original data set is large (> 1,000,000 observations), I welcome fast solutions, i.e., without using loops or merges. Usually, I create co-occurrence matrices from three-column data frames using sparseMatrix().
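For reference, here is a slow but direct sketch of the counting rule (not a fast solution, and the helper objects pairs, counts and common_a are just for illustration): for each unordered pair of values of c, count the distinct values of a on which they co-occur with at least one unequal pair of b values. On the example data it reproduces the expected counts, listing each pair once.
pairs <- combn(sort(unique(df$c)), 2)
counts <- apply(pairs, 2, function(p) {
  x <- df[df$c == p[1], ]
  y <- df[df$c == p[2], ]
  common_a <- intersect(x$a, y$a)
  # count the a values on which the pair co-occurs with differing b
  sum(vapply(common_a,
             function(av) any(outer(x$b[x$a == av], y$b[y$a == av], "!=")),
             logical(1)))
})
subset(data.frame(c.x = pairs[1, ], c.y = pairs[2, ], count = counts), count > 0)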
I'm not sure from your description if this is what you had in mind, nor how fast this would turn out to be, but here is an approach with purrr:
library(purrr)
split(df, df$c) %>%
  combn(2, simplify = FALSE) %>%
  set_names(map_chr(., ~ paste(names(.x), collapse = "_"))) %>%
  map_int(~ merge(.x[[1]], .x[[2]], by = NULL) %>%
            dplyr::filter(a.x == a.y & b.x != b.y) %>%
            nrow())
Returns:
1_2 1_3 1_4 2_3 2_4 3_4
  9   4   2   3   0   0
# Data used:
df <- structure(list(a = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4), b = c(1, 1, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 1, 1, 1, 2, 2, 2, 4), c = c(1, 2, 1, 2, 3, 2, 3, 1, 2, 1, 1, 2, 3, 1, 2, 1, 1, 2, 4, 1)), class = "data.frame", row.names = c(NA, -20L))

Adding rows to make a full long dataset for longitudinal data analysis

I am working with a long-format longitudinal dataset where each person has 1, 2 or 3 time points. In order to perform certain analyses I need to make sure that each person has the same number of rows, even if some rows consist of NAs because the person did not complete that time point.
Here is a sample of the data before adding the rows:
structure(list(Values = c(23, 24, 45, 12, 34, 23), P_ID = c(1,
1, 2, 2, 2, 3), Event_code = c(1, 2, 1, 2, 3, 1), Site_code = c(1,
1, 3, 3, 3, 1)), class = "data.frame", row.names = c(NA, -6L))
This is the data I aim to get after adding the relevant rows:
structure(list(Values = c(23, 24, NA, 45, 12, 34, 23, NA, NA),
P_ID = c(1, 1, 1, 2, 2, 2, 3, 3, 3), Event_code = c(1, 2,
3, 1, 2, 3, 1, 2, 3), Site_code = c(1, 1, 1, 3, 3, 3, 1,
1, 1)), class = "data.frame", row.names = c(NA, -9L))
I want to come up with code that would automatically add rows to the dataset conditionally on whether the participant has had 1, 2 or 3 visits. Ideally it would set the rest of the data to NA while copying P_ID and Site_code, but if that is not possible I would be satisfied with just creating the right number of rows.
We could use fill after doing a complete
library(dplyr)
library(tidyr)
ExpandedDataset %>%
  complete(P_ID, Event_code) %>%
  fill(Site_code)
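Applied to the posted sample data (a sketch; ExpandedDataset above is assumed to be this long data frame), grouping by P_ID before the fill keeps one person's Site_code from leaking into the next:
library(dplyr)
library(tidyr)

dat <- structure(list(Values = c(23, 24, 45, 12, 34, 23),
                      P_ID = c(1, 1, 2, 2, 2, 3),
                      Event_code = c(1, 2, 1, 2, 3, 1),
                      Site_code = c(1, 1, 3, 3, 3, 1)),
                 class = "data.frame", row.names = c(NA, -6L))

dat %>%
  complete(P_ID, Event_code) %>%               # add the missing visits as NA rows
  group_by(P_ID) %>%
  fill(Site_code, .direction = "downup") %>%   # copy Site_code within each person
  ungroup()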
I came up with quite long code, but you could wrap it in a function to make it easier.
Here's your dataframe:
df <- data.frame(ID = c(rep("P1", 2), rep("P2", 3), "P3"),
                 Event = c("baseline", "visit 2", "baseline", "visit 2", "visit 3", "baseline"),
                 Event_code = c(1, 2, 1, 2, 3, 1),
                 Site_code = c(1, 1, 2, 2, 2, 1))
How many records do you have per ID?
values <- table(df$ID)  # per-ID record counts (works whether ID is a factor or character)
What is the maximum number of records for a single patient?
target <- max(values)
Which specific patients have fewer records than the maximum?
uncompliant <- names(which(values<target))
And how many records do you have for those patients who have missing information?
rowcount <- values[which(values<target)]
So now, let's create the vectors of the data frame we will add to your original one. First, IDs:
IDs <- vector()
for (i in 1:length(rowcount)) {
  y <- rep(uncompliant[i], target - rowcount[i])
  IDs <- c(IDs, y)
}
And now, the site codes:
SC <- vector()
for (i in 1:length(rowcount)) {
  y <- rep(unique(df$Site_code[which(df$ID == uncompliant[i])]), target - rowcount[i])
  SC <- c(SC, y)
}
Finally, a data frame with the values we will introduce:
introduce <- data.frame(ID = IDs, Event = rep(NA, length(IDs)),
                        Event_code = rep(NA, length(IDs)),
                        Site_code = SC)
Combine the original dataframe with the new values to be added and sort it so it looks nice:
final <- as.data.frame(rbind(df, introduce))
final <- final[order(final$ID), ]
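For comparison, a more compact base-R sketch of the same padding (not from the answer; grid and full are illustrative names): build the full ID by Event_code grid, merge the observed rows onto it, then carry each ID's single Site_code onto the padded rows. It assumes one Site_code per ID, as in the example.
grid <- expand.grid(ID = unique(df$ID),
                    Event_code = sort(unique(df$Event_code)),
                    stringsAsFactors = FALSE)
full <- merge(grid, df, by = c("ID", "Event_code"), all.x = TRUE)
# fill the padded rows' Site_code from the same ID's observed rows
full$Site_code <- ave(full$Site_code, full$ID,
                      FUN = function(x) replace(x, is.na(x), unique(na.omit(x))))
full[order(full$ID, full$Event_code), ]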

count non-NA values and group by variable

I am trying to show how many complete observations there are for each ID, without using complete.cases() or any other package.
If I use na.omit to filter out the NA values, I will lose all of the IDs which might have ZERO complete cases.
In the end, I'd like a frequency table with two columns: ID and Number of Complete Observations
> length(unique(data$ID))
[1] 332
> head(data)
ID value
1 1 NA
2 1 NA
3 1 NA
4 1 NA
5 1 NA
6 1 NA
> dim(data)
[1] 772087 2
When I try to create my own function z, which counts non-NA values, and apply it in the aggregate() function, the IDs with zero complete observations are left out. I should be left with 332 rows, not 323. How does one resolve this using base functions?
z <- function(x){
sum(!is.na(x))
}
aggregate(value ~ ID, data = data , FUN = "z")
> nrow(aggregate(isna ~ ID, data = data , FUN = "z"))
[1] 323
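For what it's worth, the IDs disappear because the formula method of aggregate() applies na.omit() before grouping; keeping the NA rows with na.action = na.pass preserves all 332 IDs (a small sketch reusing z from above):
aggregate(value ~ ID, data = data, FUN = z, na.action = na.pass)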
One of the ways to do this is using table:
df2 <- table(df$Id, !is.na(df$value))[,2]
data.frame(ID = names(df2), value = df2)
Data
structure(list(Id = c(1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 4, 4), value = c(NA,
1, 1, 2, 2, NA, 3, NA, 3, 3, 4, 4)), .Names = c("Id", "value"
), row.names = c(NA, -12L), class = "data.frame")
In base R you can use your utility function like this:
stack(by(data$value, data$ID, FUN=function(x) sum(!is.na(x))))
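A closely related base option (a sketch, not part of the answer) is tapply(), which likewise keeps IDs whose values are all NA:
counts <- tapply(!is.na(data$value), data$ID, sum)  # non-NA count per ID
data.frame(ID = names(counts), complete = as.vector(counts))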
You can use table directly for this purpose. Below is the sample code:
df1 <- structure(list(Id = c(1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 4, 4), value = c(2,
1, 1, NA, NA, NA, 3, NA, 3, 3, 4, 4)), .Names = c("Id", "value"
), row.names = c(NA, -12L), class = "data.frame")
df2 <- as.data.frame.matrix(with(df1, table(Id, value)))
resultDf <- data.frame(Id=row.names(df2), count=apply(df2, 1, sum))
resultDf
The code makes a table of Id and value. It then sums the counts in each row, which gives the number of non-NA values per Id. Hope this is easy to understand and helps.

Replace NAs in a variable with their value from another observation

I have a data frame like this one:
df <- data.frame(A = c(1, 2, 3, 4, 2, 2, 1, 5, 3),
                 B = c("a", "b", "c", "d", NA, "b", NA, NA, NA))
I want to replace the NAs with the value recovered from another observation. For example, in variable A, the value 1 corresponds to "a" in variable B, so the NA should be replaced by "a".
But for 5 we can't conclude anything, so I keep the NA.
How could I do this? I'm stuck.
Thank you.
You could try
df$B <- with(df, ave(as.character(B), A, FUN = function(x)
  ifelse(is.na(x), na.omit(x), x)))
Or using data.table
library(data.table)
setDT(df)[, B := ifelse(is.na(B), na.omit(B), B), A]
Or a variant would be
setDT(df)[,B:=if(any(is.na(B))) unique(na.omit(B)), A][]
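Another option, sketched here and not part of the original answers: group by A and let tidyr::fill() copy the nearest non-NA B within each group; A == 5 has no non-NA B, so it stays NA.
library(dplyr)
library(tidyr)
df %>%
  group_by(A) %>%
  fill(B, .direction = "downup") %>%  # borrow B from another row with the same A
  ungroup()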

Replace NAs in one vector with sequential elements of another vector

I'd like to replace NA elements of a vector with elements from a sequence, for example:
x <- c(1, NA, 5, NA, NA, 2, 12, NA)
replace.seq <- -1:-4 # Can assume length(replace.seq) == sum(is.na(x))
goal <- c(1, -1, 5, -2, -3, 2, 12, -4)
What's an efficient way to do this? I'd prefer to avoid sorting x.
Per @akrun:
x[is.na(x)] <- replace.seq
You can use replace:
x <- replace(x, is.na(x), replace.seq)
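A quick check of either approach against the stated goal, as a usage example:
x <- c(1, NA, 5, NA, NA, 2, 12, NA)
replace.seq <- -1:-4
x <- replace(x, is.na(x), replace.seq)
identical(x, goal)  # TRUE, with goal as defined above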
