Friendship network identification in R

I want to identify networks in which all people in the same network are directly or indirectly connected through friendship nominations, while no students from different networks are connected.
I am using the Add Health data. Each student nominates up to 10 friends.
A sample of the data may look like this:
ID FID_1 FID_2 FID_3 FID_4 FID_5 FID_6 FID_7 FID_8 FID_9 FID_10
1 2 6 7 9 10 NA NA NA NA NA
2 5 9 12 45 13 90 87 6 NA NA
3 1 2 4 7 8 9 10 14 16 18
100 110 120 122 125 169 178 190 200 500 520
500 100 110 122 125 169 178 190 200 500 520
700 800 789 900 NA NA NA NA NA NA NA
1000 789 2000 820 900 NA NA NA NA NA NA
There are around 85,000 individuals. Could anyone please tell me how I can generate the network IDs?
So, I would like the data to look like the following:
ID network_ID ID network_ID
1 1 700 3
2 1 789 3
3 1 800 3
4 1 820 3
5 1 900 3
6 1 1000 3
7 1 2000 3
8 1
9 1
10 1
12 1
13 1
14 1
16 1
18 1
90 1
87 1
100 2
110 2
120 2
122 2
125 2
169 2
178 2
190 2
200 2
500 2
520 2
So, everyone directly or indirectly connected to ID 1 belongs to network 1. 2 is a friend of 1, so everyone directly or indirectly connected to 2 is also in 1's network, and so on. 700 is not connected to 1, to a friend of 1, or to a friend of a friend of 1, and so on; thus 700 is in a different network, which is network 3.
Any help will be much appreciated...

Update
library(igraph)
library(dplyr)
library(data.table)
setDT(df) %>%
  melt(id.var = "ID", variable.name = "FID", value.name = "ID2") %>%
  na.omit() %>%
  setcolorder(c("ID", "ID2", "FID")) %>%
  graph_from_data_frame() %>%
  components() %>%
  membership() %>%
  stack() %>%
  setNames(c("Network_ID", "ID")) %>%
  rev() %>%
  type.convert(as.is = TRUE) %>%
  arrange(Network_ID, ID)
gives:
ID Network_ID
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 8 1
9 9 1
10 10 1
11 12 1
12 13 1
13 14 1
14 16 1
15 18 1
16 45 1
17 87 1
18 90 1
19 100 2
20 110 2
21 120 2
22 122 2
23 125 2
24 169 2
25 178 2
26 190 2
27 200 2
28 500 2
29 520 2
30 700 3
31 789 3
32 800 3
33 820 3
34 900 3
35 1000 3
36 2000 3
Data
> dput(df)
structure(list(ID = c(1L, 2L, 3L, 100L, 500L, 700L, 1000L), FID_1 = c(2L,
5L, 1L, 110L, 100L, 800L, 789L), FID_2 = c(6L, 9L, 2L, 120L,
110L, 789L, 2000L), FID_3 = c(7L, 12L, 4L, 122L, 122L, 900L,
820L), FID_4 = c(9L, 45L, 7L, 125L, 125L, NA, 900L), FID_5 = c(10L,
13L, 8L, 169L, 169L, NA, NA), FID_6 = c(NA, 90L, 9L, 178L, 178L,
NA, NA), FID_7 = c(NA, 87L, 10L, 190L, 190L, NA, NA), FID_8 = c(NA,
6L, 14L, 200L, 200L, NA, NA), FID_9 = c(NA, NA, 16L, 500L, 500L,
NA, NA), FID_10 = c(NA, NA, 18L, 520L, 520L, NA, NA)), class = "data.frame", row.names = c(NA,
-7L))
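For reference, the same component labelling can be sketched without the melt() step, building the edge list in base R and letting igraph do the rest. This is a minimal sketch on a toy three-row version of the nomination data (the real data has FID_1 through FID_10):

```r
library(igraph)

# Toy version of the nomination data with two FID columns;
# the real data has FID_1 through FID_10.
df <- data.frame(ID    = c(1L, 4L, 7L),
                 FID_1 = c(2L, 5L, 8L),
                 FID_2 = c(3L, NA, 9L))

fid_cols <- grep("^FID_", names(df), value = TRUE)

# Stack the nomination columns into a two-column edge list, dropping NAs.
edges <- na.omit(data.frame(
  from = rep(df$ID, times = length(fid_cols)),
  to   = unlist(df[fid_cols], use.names = FALSE)
))

# Connected components of the friendship graph give the network IDs.
g   <- graph_from_data_frame(edges, directed = FALSE)
out <- data.frame(ID         = as.integer(V(g)$name),
                  Network_ID = as.integer(components(g)$membership))
out[order(out$Network_ID, out$ID), ]
```

The toy data has three separate friend groups, so `out` assigns three distinct Network_ID values.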
Are you looking for something like this?
library(igraph)
library(data.table)
library(dplyr)
setDT(df) %>%
  melt(id.var = "ID", variable.name = "FID", value.name = "ID2") %>%
  na.omit() %>%
  setcolorder(c("ID", "ID2", "FID")) %>%
  graph_from_data_frame() %>%
  plot(edge.label = E(.)$FID)
Data
structure(list(ID = 1:3, FID_1 = c(2L, 5L, 1L), FID_2 = c(6L,
9L, 2L), FID_3 = c(7L, 12L, 4L), FID_4 = c(9L, 45L, 7L), FID_5 = c(10L,
12L, 8L), FID_6 = c(NA, 90L, 9L), FID_7 = c(NA, 87L, 10L), FID_8 = c(NA,
6L, 14L), FID_9 = c(NA, NA, 16L), FID_10 = c(NA, NA, 18L)), class = "data.frame", row.names = c(NA,
-3L))

Related

Combine rows with matched and unmatched number with additional column association

X1 Y2 X3 Y4
2 50 2 60
2 43 3 55
4 80 5 34
5 30 5 59
6 69 5 87
6 56 6 85
8 22
9 25
I have 4 columns, in unequal pairs: the x columns hold mass values (decimal) and the y columns hold intensity (decimal). I want to combine matching mass values (x), keeping the highest intensity (y), and set the intensity of unmatched masses (x) to zero:
X1 Y2 X3 Y4
2 50 2 60
3 00 3 55
4 80 4 00
5 30 5 87
6 69 6 85
8 00 8 22
9 00 9 25
How can I do the operation? Any help will be greatly appreciated.
It is a bit messy, but it does the job:
library(dplyr)
# keep the highest intensity per mass in each column pair
# (`table` is the data frame below; the name shadows base::table)
a <- na.omit(table[, 1:2]) %>% group_by(X1) %>% summarize(Y2 = max(Y2))
b <- na.omit(table[, 3:4]) %>% group_by(X3) %>% summarize(Y4 = max(Y4))
# full outer join on mass, then zero out the unmatched entries
out <- merge(a, b, by.x = "X1", by.y = "X3", all = TRUE)
out[is.na(out)] <- 0
out
gives:
X1 Y2 Y4
1 2 50 60
2 3 0 55
3 4 80 0
4 5 30 87
5 6 69 85
6 8 0 22
7 9 0 25
Data:
table <-structure(list(X1 = c(2L, 2L, 4L, 5L, 6L, 6L, NA, NA), Y2 = c(50L,
43L, 80L, 30L, 69L, 56L, NA, NA), X3 = c(2L, 3L, 5L, 5L, 5L,
6L, 8L, 9L), Y4 = c(60L, 55L, 34L, 59L, 87L, 85L, 22L, 25L)), class = "data.frame", row.names = c(NA,
-8L))
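For what it's worth, the same steps can stay entirely in dplyr by swapping merge() for full_join() and the NA overwrite for coalesce(). A sketch, using a trimmed version of the `table` data above so the chunk is self-contained:

```r
library(dplyr)

# Trimmed version of the `table` data above.
table <- data.frame(X1 = c(2L, 2L, 4L, NA),
                    Y2 = c(50L, 43L, 80L, NA),
                    X3 = c(2L, 3L, 4L, 9L),
                    Y4 = c(60L, 55L, 34L, 25L))

# Highest intensity per mass in each column pair.
a <- table %>% select(X1, Y2) %>% na.omit() %>% group_by(X1) %>% summarize(Y2 = max(Y2))
b <- table %>% select(X3, Y4) %>% na.omit() %>% group_by(X3) %>% summarize(Y4 = max(Y4))

# Full outer join on the mass columns, then zero-fill the unmatched side.
out <- full_join(a, b, by = c("X1" = "X3")) %>%
  mutate(Y2 = coalesce(Y2, 0L), Y4 = coalesce(Y4, 0L)) %>%
  arrange(X1)
out
```

The result is one row per mass value, with 0 wherever a mass appears in only one of the two pairs.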

Is there an R function for removing rows based on a column condition?

Hello all, my df looks like:
PID V1
123 1
123 2
123 3
111 1
111 2
111 1
122 3
122 1
122 1
333 1
333 4
333 2
I want to delete the rows of any PID whose V1 events contain only 1 and 2.
The expected output is:
PID V1
123 1
123 2
123 3
122 3
122 1
122 1
333 1
333 4
333 2
You can do this in base R:
subset(df, !ave(V1 %in% 1:2, PID, FUN = all))
# PID V1
#1 123 1
#2 123 2
#3 123 3
#7 122 3
#8 122 1
#9 122 1
#10 333 1
#11 333 4
#12 333 2
or with dplyr:
library(dplyr)
df %>% group_by(PID) %>% filter(!all(V1 %in% 1:2))
or with data.table:
library(data.table)
setDT(df)[, .SD[!all(V1 %in% 1:2)], PID]
The logic of all three is the same: remove the groups (PID) that have only 1 and 2 in the V1 column.
data
df <- structure(list(PID = c(123L, 123L, 123L, 111L, 111L, 111L, 122L,
122L, 122L, 333L, 333L, 333L), V1 = c(1L, 2L, 3L, 1L, 2L, 1L,
3L, 1L, 1L, 1L, 4L, 2L)), class = "data.frame", row.names = c(NA, -12L))

How to subtract one row from multiple rows by group, for data set with multiple columns in R?

I would like to learn how to subtract one row from multiple rows by group, and save the results as a data table/matrix in R. For example, take the following data frame:
data.frame("patient" = c("a","a","a", "b","b","b","c","c","c"), "Time" = c(1,2,3), "Measure 1" = sample(1:100,size = 9,replace = TRUE), "Measure 2" = sample(1:100,size = 9,replace = TRUE), "Measure 3" = sample(1:100,size = 9,replace = TRUE))
patient Time Measure.1 Measure.2 Measure.3
1 a 1 19 5 75
2 a 2 64 20 74
3 a 3 40 4 78
4 b 1 80 91 80
5 b 2 48 31 73
6 b 3 10 5 4
7 c 1 30 67 55
8 c 2 24 13 90
9 c 3 45 31 88
For each patient, I would like to subtract the row where Time == 1 from all rows associated with that patient. The result would be:
patient Time Measure.1 Measure.2 Measure.3
1 a 1 0 0 0
2 a 2 45 15 -1
3 a 3 21 -1 3
4 b 1 0 0 0
5 b 2 -32 -60 -7
6 b 3 -70 -86 -76
7 c 1 0 0 0
....
I have tried the following code using the dplyr package, but to no avail:
raw_patient<- group_by(rawdata,patient, Time)
baseline_patient <-mutate(raw_patient,cpls = raw_patient[,]- raw_patient["Time" == 0,])
As there are multiple columns, we can use mutate_at, specifying the variables in vars, and then subtract from each element the element in that column corresponding to Time == 1, after grouping by 'patient':
library(dplyr)
df1 %>%
  group_by(patient) %>%
  mutate_at(vars(matches("Measure")), funs(. - .[Time == 1]))
# A tibble: 9 × 5
# Groups: patient [3]
# patient Time Measure.1 Measure.2 Measure.3
# <chr> <int> <int> <int> <int>
#1 a 1 0 0 0
#2 a 2 45 15 -1
#3 a 3 21 -1 3
#4 b 1 0 0 0
#5 b 2 -32 -60 -7
#6 b 3 -70 -86 -76
#7 c 1 0 0 0
#8 c 2 -6 -54 35
#9 c 3 15 -36 33
data
df1 <- structure(list(patient = c("a", "a", "a", "b", "b", "b", "c",
"c", "c"), Time = c(1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L), Measure.1 = c(19L,
64L, 40L, 80L, 48L, 10L, 30L, 24L, 45L), Measure.2 = c(5L, 20L,
4L, 91L, 31L, 5L, 67L, 13L, 31L), Measure.3 = c(75L, 74L, 78L,
80L, 73L, 4L, 55L, 90L, 88L)), .Names = c("patient", "Time",
"Measure.1", "Measure.2", "Measure.3"), class = "data.frame", row.names = c("1",
"2", "3", "4", "5", "6", "7", "8", "9"))

Joining and merging two dataframes with no multiplicity

The title is really confusing I know, but I have to explain my problem.
I have 2 datasets. The first contains the frequency of citations by year for every article, which is represented by pmid. It looks like this:
pmid year freq
1 14561399 2011 1
2 14561399 2012 3
3 18511332 2010 1
4 21193046 2012 2
5 21193046 2013 2
6 14561399 2013 1
7 18511332 2011 1
8 18511332 2012 3
9 14561399 2014 1
10 16533158 2013 2
and the second contains article features and looks like this:
pmid title_char title_wrds
1 20711763 75 9
2 20734175 109 12
3 20058113 93 13
4 20625865 142 17
5 20517661 103 12
6 20195930 128 16
As you can see, both datasets contain pmid, which is the parameter by which I need to merge or join them. That's not a problem; it can be done with just the merge() function or with the dplyr package. But when I do this, the results look like this:
pmid title_char title_wrds year freq
1 184 77 10 2010 1
2 406 142 20 2008 1
3 407 110 16 2008 1
4 407 110 16 2003 1
5 408 79 10 1998 1
6 450 58 7 2012 2
7 450 58 7 2009 1
My problem is, as you can see for example in rows 3 and 4, that these two rows contain the same article (the same pmid, the same features), but it appears twice because of the year of citation.
pmid title_char title_wrds year freq
3 407 110 16 2008 1
4 407 110 16 2003 1
And I want something like this:
pmid title_char title_wrds year2008Freq year2003Freq
1 407 110 16 1 1
That is, one line per article.
You could try:
library(reshape2)
res <- dcast(dfN, ...~paste0('year', year, 'Freq'), value.var='freq')
data
dfN <- structure(list(pmid = c(184L, 406L, 407L, 407L, 408L, 450L, 450L
), title_char = c(77L, 142L, 110L, 110L, 79L, 58L, 58L),
title_wrds = c(10L,
20L, 16L, 16L, 10L, 7L, 7L), year = c(2010L, 2008L, 2008L, 2003L,
1998L, 2012L, 2009L), freq = c(1L, 1L, 1L, 1L, 1L, 2L, 1L)),
.Names = c("pmid",
"title_char", "title_wrds", "year", "freq"), class = "data.frame",
row.names = c("1", "2", "3", "4", "5", "6", "7"))

How to select data that have complete cases of a certain column?

I'm trying to get a data frame (just.samples.with.shoulder.values, say) containing only the samples that have non-NA Shoulders values. I've tried to accomplish this using the complete.cases function, but I imagine that I'm doing something wrong syntactically below:
data <- structure(list(Sample = 1:14, Head = c(1L, 0L, NA, 1L, 1L, 1L,
0L, 0L, 1L, 1L, 1L, 1L, 0L, 1L), Shoulders = c(13L, 14L, NA,
18L, 10L, 24L, 53L, NA, 86L, 9L, 65L, 87L, 54L, 36L), Knees = c(1L,
1L, NA, 1L, 1L, 2L, 3L, 2L, 1L, NA, 2L, 3L, 4L, 3L), Toes = c(324L,
5L, NA, NA, 5L, 67L, 785L, 42562L, 554L, 456L, 7L, NA, 54L, NA
)), .Names = c("Sample", "Head", "Shoulders", "Knees", "Toes"
), class = "data.frame", row.names = c(NA, -14L))
just.samples.with.shoulder.values <- data[complete.cases(data[,"Shoulders"])]
print(just.samples.with.shoulder.values)
I would also be interested to know whether some other route (using subset(), say) is a wiser idea. Thanks so much for the help!
You can try complete.cases, which will return a logical vector that allows you to subset the data by Shoulders:
data[complete.cases(data$Shoulders), ]
# Sample Head Shoulders Knees Toes
# 1 1 1 13 1 324
# 2 2 0 14 1 5
# 4 4 1 18 1 NA
# 5 5 1 10 1 5
# 6 6 1 24 2 67
# 7 7 0 53 3 785
# 9 9 1 86 1 554
# 10 10 1 9 NA 456
# 11 11 1 65 2 7
# 12 12 1 87 3 NA
# 13 13 0 54 4 54
# 14 14 1 36 3 NA
You could try using is.na:
data[!is.na(data["Shoulders"]),]
Sample Head Shoulders Knees Toes
1 1 1 13 1 324
2 2 0 14 1 5
4 4 1 18 1 NA
5 5 1 10 1 5
6 6 1 24 2 67
7 7 0 53 3 785
9 9 1 86 1 554
10 10 1 9 NA 456
11 11 1 65 2 7
12 12 1 87 3 NA
13 13 0 54 4 54
14 14 1 36 3 NA
There is a subtle difference between using is.na and complete.cases here. On a single column the two are equivalent, but complete.cases applied to the whole data frame would also remove rows with NAs in the other columns. The objective here is to condition on one variable only, not to remove all missing values, since NAs elsewhere may be legitimate data points.
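The distinction can be seen concretely with tidyr's drop_na(), which conditions on the named column only. A minimal sketch on a cut-down version of the data above (note the NA in Toes that survives):

```r
library(tidyr)

# Cut-down version of the data above.
data <- data.frame(Sample    = 1:4,
                   Shoulders = c(13L, NA, 18L, NA),
                   Toes      = c(324L, 5L, NA, 5L))

# Drops only the rows where Shoulders is NA; the NA in Toes survives.
kept <- drop_na(data, Shoulders)
kept

# complete.cases(data) over the whole frame would also drop Sample 3.
data[complete.cases(data), ]
```

The first subset keeps Samples 1 and 3; the second keeps only Sample 1, because Sample 3 has an NA in Toes.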
