Calculating probability using the normal distribution and t-distribution in R

I have this sample:
x=c(92L, 9L, 38L, 43L, 74L, 16L, 75L, 55L, 39L, 77L, 76L, 52L,
100L, 85L, 62L, 60L, 49L, 28L, 6L, 27L, 63L, 22L, 23L, 99L, 61L,
25L, 19L, 48L, 91L, 57L, 97L, 84L, 31L, 87L, 1L, 21L, 30L, 41L,
13L, 72L, 68L, 95L, 47L, 11L, 24L, 58L, 18L, 67L, 33L, 8L, 50L,
4L, 40L, 12L, 73L, 78L, 86L, 69L, 44L, 83L, 94L, 65L, 37L, 70L,
54L, 46L, 15L, 53L, 89L, 98L, 90L, 3L, 14L, 17L, 42L, 45L, 79L,
20L, 32L, 34L, 64L, 88L, 81L, 96L, 59L, 71L, 56L, 26L, 51L, 29L,
80L, 7L, 36L, 93L, 82L, 35L, 5L, 2L, 10L, 66L)
I want to calculate the probability P(X > mean(x) + 3), assuming the data follow a normal distribution.
So I compute: mean(x) = 50.5; sd(x) = 29.01.
I generate the density distribution and calculate my probability, which is now P(X > 53.5):
pnorm(53.5, mean=mean(x), sd=sd(x), lower.tail=FALSE)
If I want to calculate it using the standard normal distribution:
P(X > 53.5) = P(Z > (53.5 - 50.5)/29.01) = P(Z > 3/29.01)
pnorm(3/29.01149, mean=0, sd=1, lower.tail=FALSE)
But when I want to use Student's t-distribution, how should I proceed?

It is more legitimate to use the t-distribution here, as the standard error is estimated from the data.
pt(3 / sd(x), df = length(x) - 1, lower.tail = FALSE)
# [1] 0.4589245
We have length(x) observations, but we also estimate one parameter (the standard error), so the degrees of freedom for the t-distribution are length(x) - 1.
There is not much difference compared with using normal distribution, though, given that length(x) is 100 (which is large enough):
pnorm(3 / sd(x), lower.tail = FALSE)
# [1] 0.4588199
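As a side-by-side check, the two tail probabilities can be computed together (a sketch of my own; it relies only on the fact that the sample above is a permutation of 1:100, so its mean and standard deviation are identical):

```r
x <- 1:100                                  # same values as the shuffled sample above
s <- sd(x)                                  # about 29.01
p_norm <- pnorm(3 / s, lower.tail = FALSE)  # normal tail probability
p_t <- pt(3 / s, df = length(x) - 1, lower.tail = FALSE)  # t tail probability
c(normal = p_norm, t = p_t)
# both are approximately 0.459; they agree to three decimal places at n = 100
```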

Related

Convert pixel values stored in text file to image

I've been trying to find a way to convert text files with pixel values into images (no matter the format) in R, but I couldn't find a way to do it.
I found solutions for MATLAB and Python, for example.
I have a file with 520 x 640 pixels with values from 0 to 255.
This is a small piece of it.
mid1al <- read.table("C:/Users/u015/Mid1_R_Al.txt", header = FALSE, sep = ";")
mid1al <- mid1al[1:20,1:20]
dput(mid1al)
structure(list(V1 = c(84L, 79L, 97L, 67L, 98L, 113L, 77L, 46L,
41L, 37L, 42L, 46L, 23L, 28L, 24L, 34L, 45L, 51L, 24L, 24L),
V2 = c(118L, 107L, 105L, 82L, 87L, 108L, 100L, 40L, 71L,
74L, 81L, 55L, 41L, 25L, 22L, 58L, 53L, 38L, 26L, 36L), V3 = c(103L,
116L, 128L, 82L, 77L, 104L, 97L, 50L, 65L, 78L, 98L, 111L,
86L, 59L, 35L, 51L, 43L, 46L, 33L, 47L), V4 = c(114L, 91L,
90L, 96L, 103L, 98L, 86L, 36L, 50L, 65L, 98L, 125L, 86L,
32L, 24L, 36L, 36L, 44L, 34L, 43L), V5 = c(68L, 70L, 85L,
85L, 100L, 111L, 61L, 12L, 42L, 70L, 103L, 103L, 45L, 27L,
18L, 27L, 32L, 43L, 51L, 41L), V6 = c(43L, 87L, 85L, 89L,
130L, 123L, 78L, 43L, 15L, 39L, 62L, 44L, 27L, 14L, 19L,
61L, 83L, 90L, 88L, 88L), V7 = c(20L, 72L, 116L, 124L, 133L,
133L, 103L, 56L, 21L, 9L, 19L, 26L, 18L, 32L, 67L, 92L, 100L,
105L, 94L, 79L), V8 = c(69L, 96L, 120L, 144L, 142L, 101L,
96L, 46L, 14L, 4L, 8L, 2L, 24L, 73L, 96L, 106L, 103L, 116L,
109L, 74L), V9 = c(118L, 122L, 134L, 135L, 133L, 98L, 57L,
20L, 5L, 5L, 2L, 14L, 51L, 89L, 117L, 95L, 103L, 93L, 104L,
77L), V10 = c(122L, 107L, 127L, 147L, 128L, 88L, 24L, 11L,
10L, 4L, 10L, 31L, 74L, 104L, 113L, 107L, 109L, 99L, 103L,
45L), V11 = c(105L, 120L, 114L, 132L, 125L, 112L, 51L, 6L,
3L, 9L, 18L, 49L, 82L, 111L, 111L, 96L, 92L, 81L, 75L, 18L
), V12 = c(98L, 104L, 103L, 126L, 147L, 128L, 61L, 26L, 2L,
9L, 18L, 50L, 105L, 103L, 101L, 98L, 74L, 53L, 18L, 1L),
V13 = c(107L, 91L, 108L, 109L, 138L, 114L, 88L, 33L, 2L,
4L, 9L, 61L, 71L, 77L, 78L, 83L, 43L, 38L, 8L, 5L), V14 = c(53L,
60L, 43L, 49L, 104L, 128L, 72L, 44L, 6L, 8L, 10L, 24L, 35L,
27L, 33L, 37L, 31L, 24L, 10L, 5L), V15 = c(13L, 16L, 11L,
27L, 62L, 78L, 73L, 30L, 8L, 7L, 31L, 66L, 66L, 33L, 13L,
27L, 16L, 18L, 12L, 7L), V16 = c(11L, 12L, 7L, 3L, 16L, 35L,
45L, 13L, 5L, 7L, 22L, 74L, 73L, 31L, 16L, 43L, 35L, 14L,
15L, 8L), V17 = c(15L, 16L, 7L, 8L, 1L, 5L, 15L, 13L, 31L,
33L, 22L, 34L, 38L, 17L, 18L, 41L, 39L, 26L, 19L, 12L), V18 = c(9L,
15L, 7L, 2L, 2L, 5L, 5L, 25L, 50L, 55L, 35L, 25L, 14L, 8L,
18L, 44L, 36L, 36L, 19L, 0L), V19 = c(15L, 16L, 4L, 6L, 4L,
6L, 22L, 45L, 59L, 48L, 56L, 58L, 52L, 30L, 22L, 46L, 41L,
50L, 23L, 7L), V20 = c(20L, 7L, 4L, 2L, 6L, 14L, 40L, 55L,
74L, 60L, 69L, 74L, 60L, 56L, 38L, 45L, 67L, 39L, 25L, 11L
)), row.names = c(NA, 20L), class = "data.frame")
Is there a way to create this image in RStudio?
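For what it's worth, base R's image() can render such a matrix of 0-255 grey values directly. A minimal sketch (the toy matrix, the grayscale mapping, and the orientation flip are my assumptions, not from this thread):

```r
# Toy stand-in for the real 520 x 640 file: a 20 x 20 matrix of 0-255 values
m <- matrix(sample(0:255, 20 * 20, replace = TRUE), nrow = 20)
# image() draws row 1 at the bottom and fills column-first,
# so flip the rows and transpose to keep the usual image orientation
image(t(m[nrow(m):1, ]), col = gray.colors(256, start = 0, end = 1), axes = FALSE)
# To write an actual image file instead, the png package offers:
# png::writePNG(m / 255, "out.png")
```

For the real data, something like `m <- as.matrix(mid1al)` after reading the full file should work the same way.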

Surface plot with an incomplete grid

I want a surface plot, but my grid is incomplete. I searched, but without success. How can I make the following work:
x = c(10L, 20L, 30L, 40L, 50L, 60L, 70L, 80L, 90L, 100L, 30L, 40L,
50L, 60L, 70L, 80L, 90L, 100L, 50L, 60L, 70L, 80L, 90L, 100L,
70L, 80L, 90L, 100L, 90L, 100L)
y = c(10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 20L, 20L,
20L, 20L, 20L, 20L, 20L, 20L, 30L, 30L, 30L, 30L, 30L, 30L,
40L, 40L, 40L, 40L, 50L, 50L)
z = c(6.093955007, 44.329214443, 149.103755156, 351.517349974,
726.51174655, 1191.039562104, 1980.245204702, 2783.308022984,
6974.563519067, 5149.396230019, 142.236259009, 321.170609648,
684.959503897, 1121.475597135, 1878.334840961, 2683.116309688,
4159.60732066, 5294.774284119, 687.430547359, 1119.765405426,
1876.57337196, 2685.951176024, 3945.696884503, 5152.986796572,
1870.78724464, 2677.744176903, 3951.928931107, 5160.295960254,
3957.503273558, 5147.237754092)
# OK but not a surface plot
scatterplot3d::scatterplot3d(x, y, z,
color = "blue", pch = 19,
type = "h",
main = "",
xlab = "x",
ylab = "y",
zlab = "z",
angle = 35,
grid = FALSE)
# Not working:
M <- plot3D::mesh(x, y, z)
R <- with (M, sqrt(x^2 + y^2 +z^2))
p <- sin(2*R)/(R+1e-3)
plot3D::slice3D(x, y, z, colvar = p,
xs = 0, ys = c(-4, 0, 4), zs = NULL)
plot3D::isosurf3D(x, y, z, colvar = p, level = 0, col = "red")
For this kind of problem I can recommend the deldir package, which performs Delaunay triangulation and Dirichlet tessellation. From this you can calculate the spatial triangles that make up the surface plot.
The rgl package lets you add the triangles to your scatterplot. Even better, the resulting plot is interactive, so you can rotate and zoom for a better overview.
x = c(10L, 20L, 30L, 40L, 50L, 60L, 70L, 80L, 90L, 100L, 30L, 40L,
50L, 60L, 70L, 80L, 90L, 100L, 50L, 60L, 70L, 80L, 90L, 100L,
70L, 80L, 90L, 100L, 90L, 100L)
y = c(10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 20L, 20L,
20L, 20L, 20L, 20L, 20L, 20L, 30L, 30L, 30L, 30L, 30L, 30L,
40L, 40L, 40L, 40L, 50L, 50L)
z = c(6.093955007, 44.329214443, 149.103755156, 351.517349974,
726.51174655, 1191.039562104, 1980.245204702, 2783.308022984,
6974.563519067, 5149.396230019, 142.236259009, 321.170609648,
684.959503897, 1121.475597135, 1878.334840961, 2683.116309688,
4159.60732066, 5294.774284119, 687.430547359, 1119.765405426,
1876.57337196, 2685.951176024, 3945.696884503, 5152.986796572,
1870.78724464, 2677.744176903, 3951.928931107, 5160.295960254,
3957.503273558, 5147.237754092)
# create spatial triangles
del <- deldir::deldir(x, y, z = z)
triangs <- do.call(rbind, deldir::triang.list(del))
# create plot
rgl::plot3d(x, y, z, size = 5, xlab = "my_x", ylab = "my_y", zlab = "my_z", col = "red")
rgl::triangles3d(triangs[, c("x", "y", "z")], col = "gray")
I hope this helps.
This is more of a hint than a complete answer:
library(plotly)
plot_ly(z = ~volcano) %>% add_surface()
is a nice way to do this kind of plot. So for your example:
x <- c(10L, 20L, 30L, 40L, 50L, 60L, 70L, 80L, 90L, 100L, 30L, 40L,
50L, 60L, 70L, 80L, 90L, 100L, 50L, 60L, 70L, 80L, 90L, 100L,
70L, 80L, 90L, 100L, 90L, 100L)
y <- c(10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 20L, 20L,
20L, 20L, 20L, 20L, 20L, 20L, 30L, 30L, 30L, 30L, 30L, 30L,
40L, 40L, 40L, 40L, 50L, 50L)
z <- c(6.093955007, 44.329214443, 149.103755156, 351.517349974,
726.51174655, 1191.039562104, 1980.245204702, 2783.308022984,
6974.563519067, 5149.396230019, 142.236259009, 321.170609648,
684.959503897, 1121.475597135, 1878.334840961, 2683.116309688,
4159.60732066, 5294.774284119, 687.430547359, 1119.765405426,
1876.57337196, 2685.951176024, 3945.696884503, 5152.986796572,
1870.78724464, 2677.744176903, 3951.928931107, 5160.295960254,
3957.503273558, 5147.237754092)
library(plotly)
m <- matrix(c(x,y,z), nrow = 3)
plot_ly(z = ~m) %>% add_surface()
produces a surface plot. This is a first step, but there are still some issues with the scaling of the x-axis. I think the key to the solution is to set up the whole (sparse) matrix and then plot it.
x <- c(10L, 20L, 30L, 40L, 50L, 60L, 70L, 80L, 90L, 100L, 30L, 40L,
50L, 60L, 70L, 80L, 90L, 100L, 50L, 60L, 70L, 80L, 90L, 100L,
70L, 80L, 90L, 100L, 90L, 100L)
y <- c(10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 20L, 20L,
20L, 20L, 20L, 20L, 20L, 20L, 30L, 30L, 30L, 30L, 30L, 30L,
40L, 40L, 40L, 40L, 50L, 50L)
z <- c(6.093955007, 44.329214443, 149.103755156, 351.517349974,
726.51174655, 1191.039562104, 1980.245204702, 2783.308022984,
6974.563519067, 5149.396230019, 142.236259009, 321.170609648,
684.959503897, 1121.475597135, 1878.334840961, 2683.116309688,
4159.60732066, 5294.774284119, 687.430547359, 1119.765405426,
1876.57337196, 2685.951176024, 3945.696884503, 5152.986796572,
1870.78724464, 2677.744176903, 3951.928931107, 5160.295960254,
3957.503273558, 5147.237754092)
xx <- 1:100L
yy <- 1:100L
zz <- matrix(0, nrow = 100, ncol = 100)
for (i in 1:length(x)){
zz[x[i], y[i]] <- z[i]
}
library(plotly)
plot_ly(z = ~zz) %>% add_surface()
produces a surface plot, which is basically what your data suggests.
I hope I can figure that out as well. And I hope this helps.
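One possible refinement to the matrix approach above (my suggestion, not part of the original answer): fill the matrix with NA instead of 0, so that add_surface() leaves the unobserved grid cells blank rather than drawing a floor at height zero. Matrix indexing also removes the explicit loop:

```r
# Toy subset of the grid above; the full x, y, z vectors work the same way
x <- c(10L, 20L, 30L, 10L, 20L)
y <- c(10L, 10L, 10L, 20L, 20L)
z <- c(6.09, 44.33, 149.10, 142.24, 321.17)
zz <- matrix(NA_real_, nrow = 100, ncol = 100)
zz[cbind(x, y)] <- z   # assign all observed cells at once; the rest stay NA
# library(plotly); plot_ly(z = ~zz) %>% add_surface()
```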

How to find the number of occurrences of each unique value in a data set?

I have the following piece of my data-set:
> dput(test)
structure(list(X2002.06.26 = structure(c(99L, 88L, 65L, 94L,
60L, 101L, 27L, 83L, 16L, 12L, 54L, 97L, 63L, 41L, 13L, 2L, 58L,
9L, 82L, 22L, 14L, 77L, 55L, 32L, 45L, 80L, 39L, 70L, 114L, 103L,
69L, 104L, 106L, 108L, 38L, 10L, 64L, 1L, 112L, 102L, 67L, 98L,
66L, 19L, 81L, 72L, 89L, 23L, 48L, 4L, 25L, 91L, 26L, 62L, 33L,
3L, 28L, 57L, 17L, 20L, 73L, 78L, 90L, 84L, 5L, 92L, 43L, 74L,
75L, 93L, 100L, 56L, 36L, 79L, 111L, 52L, 24L, 105L, 29L, 53L,
110L, 71L, 18L, 8L, 34L, 50L, 109L, 61L, 35L, 21L, 11L, 47L,
59L, 51L, 113L, 44L, 30L, 42L, 107L, 7L, 87L, 6L, 68L, 96L, 86L,
15L, 46L, 85L, 31L, 49L, 40L, 76L, 95L, 115L, 37L), .Label = c("BMG4388N1065",
"BMG812761002", "GB00BYMT0J19", "IE00BLS09M33", "IE00BQRQXQ92",
"US0003611052", "US0015471081", "US0025671050", "US0028962076",
"US0044981019", "US0116591092", "US01741R1023", "US0185223007",
"US01988P1084", "US0305061097", "US0311001004", "US03662Q1058",
"US0375981091", "US0383361039", "US03836W1036", "US03937C1053",
"US0396701049", "US0462241011", "US06652V2088", "US0997241064",
"US1033041013", "US1096961040", "US1170431092", "US1250711009",
"US1258961002", "US12686C1099", "US1311931042", "US1416651099",
"US1423391002", "US1431301027", "US1564311082", "US1718711062",
"US1778351056", "US2193501051", "US2289031005", "US23331A1097",
"US2537981027", "US2829141009", "US2925621052", "US2966891028",
"US3116421021", "US34354P1057", "US3498531017", "US3693851095",
"US3984331021", "US3989051095", "US4158641070", "US4222451001",
"US4285671016", "US4586653044", "US4835481031", "US5261071071",
"US5367971034", "US5463471053", "US55305B1017", "US5535301064",
"US5562691080", "US5663301068", "US5871181005", "US59001A1025",
"US6081901042", "US62914B1008", "US6517185046", "US6900701078",
"US6907684038", "US6936561009", "US7081601061", "US7132781094",
"US7234561097", "US7310681025", "US7415034039", "US7496851038",
"US7549071030", "US7595091023", "US76009N1000", "US7703231032",
"US7811821005", "US7835491082", "US8081941044", "US8308791024",
"US83088M1027", "US83545G1022", "US8354951027", "US8528572006",
"US8545021011", "US85590A4013", "US8581191009", "US8589121081",
"US8681571084", "US8685361037", "US8712371033", "US8793691069",
"US8799391060", "US8832031012", "US8851601018", "US8865471085",
"US8873891043", "US88830M1027", "US8968181011", "US89785X1019",
"US8990355054", "US90385D1072", "US9134831034", "US9202531011",
"US92552R4065", "US9410531001", "US9427491025", "US9433151019",
"US9633201069", "US9837721045"), class = "factor"), X2002.06.27 = structure(c(57L,
43L, 73L, 70L, 35L, 114L, 58L, 88L, 55L, 7L, 72L, 28L, 16L, 84L,
110L, 44L, 75L, 20L, 99L, 18L, 10L, 80L, 113L, 52L, 66L, 36L,
60L, 101L, 107L, 103L, 34L, 22L, 81L, 40L, 1L, 46L, 108L, 106L,
91L, 37L, 98L, 9L, 104L, 115L, 54L, 100L, 42L, 2L, 3L, 26L, 21L,
71L, 23L, 62L, 50L, 97L, 11L, 94L, 27L, 53L, 79L, 4L, 51L, 76L,
49L, 78L, 87L, 32L, 59L, 96L, 13L, 86L, 15L, 48L, 109L, 29L,
85L, 68L, 17L, 41L, 64L, 31L, 8L, 38L, 90L, 45L, 12L, 56L, 6L,
39L, 92L, 63L, 5L, 82L, 19L, 89L, 69L, 74L, 25L, 95L, 105L, 61L,
67L, 14L, 112L, 111L, 102L, 83L, 93L, 33L, 30L, 47L, 65L, 24L,
77L), .Label = c("CH0044328745", "GB00BVVBC028", "LR0008862868",
"US0003611052", "US0010841023", "US0044981019", "US0079731008",
"US0116591092", "US0305061097", "US0311001004", "US0383361039",
"US03937C1053", "US0462241011", "US06652V2088", "US0733021010",
"US0952291005", "US0997241064", "US1096411004", "US1096961040",
"US1265011056", "US12686C1099", "US1311931042", "US1431301027",
"US1564311082", "US1628251035", "US1630721017", "US1897541041",
"US2017231034", "US23331A1097", "US2829141009", "US2925621052",
"US29444U7000", "US2974251009", "US3024913036", "US3138551086",
"US34354P1057", "US3596941068", "US3693851095", "US3719011096",
"US3825501014", "US3984331021", "US3989051095", "US4108671052",
"US4130861093", "US4158641070", "US4456581077", "US4586653044",
"US4606901001", "US48666K1097", "US5006432000", "US5053361078",
"US5138471033", "US5179421087", "US5246601075", "US5260571048",
"US5463471053", "US5526761086", "US5535301064", "US5663301068",
"US5766901012", "US59001A1025", "US6117421072", "US63935N1072",
"US6515871076", "US67066G1040", "US6795801009", "US6819191064",
"US6900701078", "US6907684038", "US6935061076", "US6936561009",
"US6951561090", "US7004162092", "US73179P1066", "US7376301039",
"US7401891053", "US74762E1029", "US7496851038", "US7549071030",
"US7757111049", "US7811821005", "US8305661055", "US8308791024",
"US8335511049", "US83545G1022", "US8354951027", "US8358981079",
"US8545021011", "US85590A4013", "US86732Y1091", "US8681681057",
"US8712371033", "US87305R1095", "US8799391060", "US8851601018",
"US88830M1027", "US8894781033", "US8962391004", "US8968181011",
"US89785X1019", "US9022521051", "US90385D1072", "US9046772003",
"US9111631035", "US9134831034", "US92552R4065", "US92552V1008",
"US9258151029", "US9292361071", "US9410531001", "US9427491025",
"US9433151019", "US9699041011", "US9746371007", "US9807451037"
), class = "factor")), .Names = c("X2002.06.26", "X2002.06.27"
), class = "data.frame", row.names = c(NA, -115L))
The actual data extends over 3000+ columns and there are approximately 1150 unique values.
I need to count how many times each of these values appears in the data set.
We can flatten the elements of the data frame first, then apply table():
tab1 <- table(do.call(c, lapply(df, as.character)))
Another option is to convert the data frame to a matrix, then apply table():
tab2 <- table(as.matrix(df))
identical(tab1, tab2)
[1] TRUE
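A small reproducible illustration of both approaches (toy data of my own, not the poster's):

```r
# Two factor columns, as in the question's data frame
df <- data.frame(a = factor(c("x", "y", "x")),
                 b = factor(c("y", "y", "z")))
# Approach 1: flatten the columns into one character vector, then tabulate
tab1 <- table(do.call(c, lapply(df, as.character)))
# Approach 2: coerce to a character matrix, then tabulate
tab2 <- table(as.matrix(df))
identical(tab1, tab2)   # TRUE
tab1
# x y z
# 2 3 1
```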

Select a set of edges which create the largest graph given that some edges are mutually exclusive of others

I'm trying to determine how to best tackle this problem.
Given a set of nodes and multiple, conflicting ways in which they could be connected, I need to select the set of non-conflicting relations such that the largest number of nodes remains connected.
Example.
Here is a graph including all possible relations (edges), ignoring conflicts. E.g., this image doesn't depict the dependence of the edges on each other.
All edges attached to a specific node are dependent on one another. For simplicity, each edge assigns an attribute (say, A...Z) to each node it connects. If an edge connecting nodes 3 and 16 specifies attributes 3-B and 16-F, then all edges connecting 16 to other nodes must have attribute 16-F. Similarly, all edges connecting 3 to other nodes must have attribute 3-B.
Here is the same graph when specifying attribute F to node 16. This attribute removes most edges leaving one edge connecting 16-4 and one edge connecting 16-3. This has left no edges between 16-42.
(16 is near the left in both images.)
This image does not illustrate that the edge connecting 3-42 will specify an attribute for node 42, say 42-X. This will further constrain connections to 42 and further break up the graph. I have not displayed this because this is what my question pertains to.
I am looking for advice.
Is this a known problem? Can you point me to any references?
How would you approach this problem? My best idea is to iterate, starting at each edge, over all possible attributes, evaluate each partitioning, and find which one preserves the largest network. This sounds challenging, though, and I could use some help.
If this is the solution, is there a way, using igraph in R, to specify an "edge attribute constraint" and pull out the resulting fragmented graph?
I have dput the graph here:
df = structure(list(nodeA = c(3L, 4L, 42L, 43L, 44L, 29L, 30L, 29L, 30L, 3L, 4L, 6L, 43L, 44L, 43L, 44L, 29L, 30L, 29L, 30L, 52L, 29L, 30L, 35L, 25L, 35L, 25L, 43L, 44L, 29L, 30L, 3L, 4L, 43L, 44L, 29L, 30L, 25L, 29L, 30L, 42L, 3L, 4L, 17L, 43L, 44L, 29L, 30L, 29L, 30L, 17L, 17L, 29L, 30L, 6L, 43L, 44L, 29L, 30L, 52L, 35L, 35L, 25L, 25L, 24L, 24L, 43L, 44L, 29L, 30L, 35L, 35L, 25L, 25L, 24L, 24L, 43L, 44L, 29L, 30L, 35L, 35L, 25L, 25L, 24L, 24L, 52L, 42L, 3L, 42L, 42L, 3L, 4L, 42L, 25L, 42L, 25L, 3L, 4L, 42L, 3L, 4L, 17L, 35L, 3L, 4L, 35L, 43L, 44L, 29L, 30L, 35L, 35L, 35L, 52L, 25L, 25L, 24L, 24L, 35L, 29L, 30L, 3L, 4L, 43L, 44L, 29L, 30L, 25L, 29L, 30L, 52L, 43L, 44L, 29L, 30L, 25L, 29L, 30L, 3L, 4L, 43L, 44L, 29L, 30L, 52L, 43L, 44L, 43L, 44L, 29L, 30L, 3L, 4L, 43L, 44L, 29L, 30L, 52L, 52L, 43L, 44L, 29L, 30L, 35L, 52L, 52L, 3L, 4L, 43L, 44L, 29L, 30L, 52L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 17L, 17L, 42L, 42L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 3L, 4L, 25L, 25L, 16L, 16L, 3L, 4L, 43L, 44L, 24L, 3L, 4L, 52L, 52L, 17L, 35L, 35L, 35L, 17L, 3L, 4L, 6L, 35L, 42L, 42L, 42L, 42L, 3L, 4L, 17L, 25L, 17L, 17L, 29L, 30L, 25L, 3L, 4L, 29L, 30L, 3L, 4L, 17L, 17L, 17L, 35L, 3L, 4L, 17L, 17L, 17L, 29L, 30L, 43L, 44L, 43L, 44L, 29L, 30L, 17L, 6L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 3L, 43L, 44L, 29L, 30L, 3L, 43L, 44L, 29L, 30L, 17L, 17L, 42L, 42L, 25L, 42L, 25L, 43L, 44L, 29L, 30L, 42L, 17L, 17L, 42L, 42L, 43L, 44L, 29L, 30L, 25L, 29L, 30L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 25L, 29L, 30L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 43L, 44L, 29L, 30L, 25L, 25L, 25L, 25L), nodeB = c(16L, 16L, 17L, 24L, 24L, 25L, 25L, 35L, 35L, 16L, 16L, 17L, 24L, 24L, 24L, 24L, 25L, 25L, 25L, 25L, 35L, 35L, 35L, 43L, 43L, 44L, 44L, 24L, 24L, 25L, 25L, 16L, 16L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 16L, 16L, 16L, 24L, 24L, 24L, 25L, 25L, 35L, 35L, 43L, 44L, 52L, 52L, 17L, 24L, 24L, 25L, 25L, 35L, 43L, 44L, 29L, 30L, 43L, 44L, 24L, 24L, 25L, 
25L, 43L, 44L, 29L, 30L, 43L, 44L, 24L, 24L, 25L, 25L, 43L, 44L, 29L, 30L, 43L, 44L, 17L, 24L, 42L, 43L, 44L, 16L, 16L, 17L, 35L, 17L, 35L, 16L, 16L, 52L, 16L, 16L, 6L, 25L, 16L, 16L, 52L, 24L, 24L, 25L, 25L, 43L, 44L, 25L, 25L, 29L, 30L, 43L, 44L, 17L, 42L, 42L, 16L, 16L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 35L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 16L, 16L, 24L, 24L, 25L, 25L, 35L, 17L, 17L, 24L, 24L, 25L, 25L, 16L, 16L, 24L, 24L, 25L, 25L, 25L, 35L, 24L, 24L, 25L, 25L, 25L, 29L, 30L, 16L, 16L, 24L, 24L, 25L, 25L, 35L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 43L, 44L, 3L, 4L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 16L, 16L, 35L, 35L, 3L, 4L, 16L, 16L, 17L, 17L, 17L, 16L, 16L, 29L, 30L, 6L, 25L, 29L, 30L, 42L, 16L, 16L, 25L, 52L, 16L, 16L, 16L, 16L, 16L, 16L, 24L, 35L, 43L, 44L, 52L, 52L, 35L, 16L, 16L, 52L, 52L, 16L, 16L, 24L, 43L, 44L, 25L, 16L, 16L, 24L, 43L, 44L, 52L, 52L, 17L, 17L, 24L, 24L, 25L, 25L, 52L, 42L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 42L, 24L, 24L, 25L, 25L, 42L, 24L, 24L, 25L, 25L, 43L, 44L, 4L, 17L, 35L, 17L, 35L, 24L, 24L, 25L, 25L, 16L, 43L, 44L, 4L, 4L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 24L, 24L, 25L, 25L, 35L, 35L, 35L, 35L), attributeA = c(25L, 25L, 130L, 110L, 110L, 110L, 110L, 113L, 113L, 43L, 43L, 71L, 5L, 5L, 127L, 127L, 5L, 5L, 127L, 127L, 72L, 130L, 130L, 137L, 140L, 137L, 140L, 6L, 6L, 6L, 6L, 56L, 56L, 137L, 137L, 137L, 137L, 130L, 140L, 140L, 29L, 68L, 68L, 56L, 143L, 143L, 143L, 143L, 146L, 146L, 43L, 43L, 45L, 45L, 46L, 80L, 80L, 80L, 80L, 47L, 11L, 11L, 80L, 80L, 80L, 80L, 84L, 84L, 84L, 84L, 14L, 14L, 84L, 84L, 84L, 84L, 90L, 90L, 90L, 90L, 18L, 18L, 90L, 90L, 90L, 90L, 110L, 37L, 122L, 114L, 114L, 108L, 108L, 58L, 27L, 136L, 109L, 26L, 26L, 115L, 111L, 111L, 78L, 109L, 112L, 112L, 78L, 114L, 114L, 114L, 114L, 37L, 37L, 47L, 73L, 114L, 114L, 114L, 114L, 128L, 111L, 111L, 125L, 125L, 54L, 
54L, 54L, 54L, 45L, 58L, 58L, 143L, 55L, 55L, 55L, 55L, 126L, 136L, 136L, 44L, 44L, 56L, 56L, 56L, 56L, 145L, 68L, 68L, 57L, 57L, 57L, 57L, 128L, 128L, 58L, 58L, 58L, 58L, 143L, 146L, 59L, 59L, 59L, 59L, 126L, 70L, 70L, 129L, 129L, 60L, 60L, 60L, 60L, 73L, 61L, 61L, 61L, 61L, 62L, 62L, 62L, 62L, 124L, 124L, 91L, 91L, 63L, 63L, 63L, 63L, 64L, 64L, 64L, 64L, 65L, 65L, 65L, 65L, 135L, 135L, 58L, 136L, 127L, 127L, 57L, 57L, 143L, 143L, 68L, 138L, 138L, 143L, 143L, 80L, 136L, 126L, 126L, 109L, 139L, 139L, 128L, 80L, 110L, 112L, 113L, 30L, 141L, 141L, 135L, 70L, 125L, 125L, 126L, 126L, 142L, 69L, 69L, 128L, 128L, 144L, 144L, 138L, 128L, 128L, 142L, 145L, 145L, 139L, 129L, 129L, 130L, 130L, 121L, 121L, 79L, 79L, 79L, 79L, 91L, 109L, 82L, 82L, 82L, 82L, 86L, 86L, 86L, 86L, 88L, 88L, 88L, 88L, 97L, 92L, 92L, 92L, 92L, 118L, 94L, 94L, 94L, 94L, 107L, 107L, 89L, 138L, 111L, 140L, 113L, 116L, 116L, 116L, 116L, 1L, 134L, 134L, 92L, 19L, 135L, 135L, 135L, 135L, 128L, 138L, 138L, 136L, 136L, 136L, 136L, 137L, 137L, 137L, 137L, 130L, 140L, 140L, 138L, 138L, 138L, 138L, 139L, 139L, 139L, 139L, 140L, 140L, 140L, 140L, 138L, 140L, 144L, 146L), attributeB = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 11L, 11L, 11L, 11L, 13L, 13L, 13L, 13L, 13L, 13L, 14L, 14L, 14L, 14L, 17L, 17L, 17L, 17L, 17L, 17L, 18L, 18L, 18L, 18L, 19L, 19L, 19L, 19L, 19L, 23L, 23L, 23L, 23L, 24L, 24L, 25L, 25L, 25L, 27L, 27L, 28L, 28L, 29L, 29L, 29L, 36L, 36L, 36L, 36L, 36L, 36L, 37L, 37L, 37L, 37L, 37L, 37L, 38L, 38L, 38L, 41L, 41L, 41L, 41L, 41L, 41L, 41L, 41L, 41L, 41L, 42L, 42L, 42L, 42L, 42L, 42L, 42L, 43L, 43L, 43L, 43L, 43L, 43L, 43L, 44L, 44L, 44L, 44L, 44L, 44L, 45L, 45L, 45L, 45L, 45L, 45L, 45L, 45L, 46L, 46L, 46L, 46L, 46L, 46L, 46L, 47L, 47L, 47L, 47L, 47L, 47L, 47L, 48L, 48L, 48L, 48L, 49L, 
49L, 49L, 49L, 49L, 49L, 50L, 50L, 50L, 50L, 50L, 50L, 51L, 51L, 51L, 51L, 52L, 52L, 52L, 52L, 54L, 54L, 54L, 55L, 56L, 56L, 56L, 56L, 56L, 56L, 57L, 58L, 58L, 58L, 58L, 59L, 59L, 59L, 59L, 59L, 60L, 60L, 60L, 60L, 62L, 63L, 64L, 65L, 66L, 66L, 66L, 66L, 66L, 66L, 66L, 66L, 67L, 68L, 68L, 68L, 68L, 70L, 70L, 70L, 70L, 70L, 71L, 72L, 72L, 72L, 72L, 72L, 72L, 72L, 77L, 77L, 78L, 78L, 78L, 78L, 79L, 80L, 81L, 81L, 81L, 81L, 85L, 85L, 85L, 85L, 87L, 87L, 87L, 87L, 89L, 91L, 91L, 91L, 91L, 92L, 93L, 93L, 93L, 93L, 96L, 96L, 97L, 108L, 108L, 110L, 110L, 115L, 115L, 115L, 115L, 117L, 117L, 117L, 118L, 122L, 125L, 125L, 125L, 125L, 125L, 125L, 125L, 126L, 126L, 126L, 126L, 127L, 127L, 127L, 127L, 127L, 127L, 127L, 128L, 128L, 128L, 128L, 129L, 129L, 129L, 129L, 130L, 130L, 130L, 130L, 135L, 137L, 141L, 143L)), .Names = c("nodeA", "nodeB", "attributeA", "attributeB" ), row.names = c(3L, 4L, 5L, 7L, 8L, 9L, 10L, 12L, 13L, 18L, 19L, 20L, 24L, 25L, 26L, 27L, 28L, 29L, 31L, 32L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 52L, 53L, 54L, 55L, 59L, 60L, 62L, 63L, 64L, 65L, 71L, 72L, 73L, 78L, 82L, 83L, 86L, 87L, 88L, 89L, 90L, 96L, 97L, 98L, 99L, 108L, 109L, 112L, 114L, 115L, 116L, 117L, 120L, 121L, 122L, 129L, 131L, 134L, 135L, 141L, 142L, 143L, 144L, 146L, 147L, 153L, 154L, 156L, 157L, 163L, 164L, 165L, 166L, 168L, 169L, 175L, 176L, 178L, 179L, 183L, 186L, 187L, 188L, 189L, 196L, 197L, 198L, 201L, 204L, 206L, 208L, 209L, 213L, 216L, 217L, 221L, 222L, 225L, 226L, 230L, 241L, 242L, 243L, 244L, 248L, 249L, 255L, 256L, 259L, 260L, 264L, 265L, 272L, 276L, 277L, 284L, 285L, 287L, 288L, 289L, 290L, 292L, 293L, 294L, 295L, 303L, 304L, 305L, 306L, 308L, 309L, 310L, 315L, 316L, 318L, 319L, 320L, 321L, 325L, 333L, 334L, 336L, 337L, 338L, 339L, 347L, 348L, 350L, 351L, 352L, 353L, 354L, 359L, 365L, 366L, 367L, 368L, 369L, 373L, 374L, 381L, 382L, 384L, 385L, 386L, 387L, 390L, 395L, 396L, 397L, 398L, 406L, 407L, 408L, 409L, 411L, 412L, 416L, 417L, 421L, 422L, 423L, 424L, 430L, 431L, 432L, 433L, 438L, 
439L, 440L, 441L, 447L, 448L, 450L, 452L, 454L, 455L, 456L, 457L, 458L, 459L, 468L, 472L, 473L, 476L, 477L, 481L, 483L, 484L, 485L, 488L, 493L, 494L, 495L, 501L, 504L, 508L, 511L, 512L, 513L, 514L, 516L, 518L, 519L, 520L, 523L, 524L, 526L, 528L, 529L, 534L, 535L, 538L, 539L, 540L, 543L, 544L, 550L, 555L, 556L, 558L, 561L, 562L, 564L, 565L, 576L, 577L, 582L, 583L, 584L, 585L, 590L, 594L, 596L, 597L, 598L, 599L, 605L, 606L, 607L, 608L, 613L, 614L, 615L, 616L, 620L, 622L, 623L, 624L, 625L, 629L, 631L, 632L, 633L, 634L, 643L, 644L, 647L, 657L, 660L, 665L, 666L, 673L, 674L, 675L, 676L, 691L, 692L, 693L, 696L, 700L, 705L, 706L, 707L, 708L, 711L, 712L, 713L, 720L, 721L, 722L, 723L, 728L, 729L, 730L, 731L, 733L, 734L, 735L, 741L, 742L, 743L, 744L, 750L, 751L, 752L, 753L, 759L, 760L, 761L, 762L, 772L, 777L, 787L, 790L), class = "data.frame")
library(igraph)
g = graph.data.frame(df)
plot(g, vertex.size = 6, edge.arrow.mode=1, edge.arrow.size = 0)
> head(df)
nodeA nodeB attributeA attributeB
1 3 16 25 1
4 4 16 25 1
5 42 17 130 1
7 43 24 110 1
8 44 24 110 1
9 29 25 110 1
In the above, row 1 attributeA is the exclusive attribute for node 3 such that all other edges connecting to node 3 must have attribute 25. Similarly, attributeB indicates that all edges connecting to node 16 must have the attribute 1. It is not necessary that row 1 be an edge, but it is necessary that no retained edges conflict.
Thanks for reading!
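To make the feasibility condition above concrete, here is a tiny sketch of the check, my own illustration with a hypothetical attribute assignment: an edge may be retained only if the attribute chosen for each endpoint equals the attribute the edge specifies.

```r
# Hypothetical attribute choice for a few nodes (named by node id)
chosen <- c(`3` = 25, `16` = 1, `42` = 130, `17` = 1)
# Two of the edges from head(df) shown above
edges <- data.frame(nodeA = c(3L, 42L), nodeB = c(16L, 17L),
                    attributeA = c(25L, 130L), attributeB = c(1L, 1L))
# An edge survives only if both endpoints match the chosen attributes
keep <- chosen[as.character(edges$nodeA)] == edges$attributeA &
        chosen[as.character(edges$nodeB)] == edges$attributeB
edges[keep, ]   # both edges are consistent with this assignment
```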
Is this a known problem? Can you point me to any references?
This is quite an interesting problem, and not one that I've encountered before.
How would you approach this problem?
I would approach this problem from an integer programming perspective. The decision variables will be used to select the attribute of each node (only edges labeled with the attributes of both of their endpoints will be allowed). Further, we will select a "root node" that we expect to be in the large connected component, and we will create flow outward from this root node. Each other node will have demand 1, and flow will only be possible over valid edges. We will maximize the amount of flow pushed out from the root node; this will be the number of other nodes in the large component.
To achieve this formulation, I would create two classes of variables:
Node attribute variables: For each node i and attribute a, I would create a binary variable z_ia that is 1 if node i is assigned attribute a and 0 otherwise.
Flow variables: For each edge from node i to j (I assume "from" is nodeA in your data frame and "to" is nodeB in your data frame), variable x_ij indicates the amount of flow from i to j (negative values indicate flow from j to i).
We also have a number of different constraints:
Each node only has 1 attribute: This can be achieved with \sum_{a\in A} z_ia = 1 for each node i, where A is the set of all attributes.
Edge flows are 0 if the edge is not valid: For each edge from i to j with attributes a and b, respectively, we will have x_ij <= n*z_ia, x_ij <= n*z_jb, x_ij >= -n*z_ia, and x_ij >= -n*z_jb. In all four constraints, n is the total number of nodes. These constraints will force x_ij=0 if z_ia=0 or z_jb=0, and otherwise will not be binding.
The net flow to any non-root node falls in [0, 1]: This constraint ensures that all outflow must come from the root, so nodes can only get flow if they are connected to the root. For each non-root node i with edges incoming from node set I and edges outgoing to node set O, these constraints are of the form \sum_{j\in I} x_ji - \sum_{j\in O} x_ij >= 0 and \sum_{j\in I} x_ji - \sum_{j\in O} x_ij <= 1.
The objective is to maximize the amount of flow out of the root node r. If r has incoming edges from nodes in set I and outgoing edges to nodes in set O, then this objective (which we maximize) is \sum_{j\in O} x_rj - \sum_{j\in I} x_jr.
With these variables and constraints in place, all you need to do is specify the root node r and solve; the solution will indicate the best possible assignment of attributes to nodes, assuming that r is in the largest component. If you re-solved for each root node r, you would end up with the global optimal assignment.
The following is an implementation of this approach with the lpSolve package in R:
library(lpSolve)
optim <- function(df, r) {
# Some book keeping
nodes = c(df$nodeA, df$nodeB)
u.nodes <- unique(nodes)
if (!r %in% u.nodes) {
stop("Invalid root node provided")
}
n.node <- length(u.nodes)
attrs = c(df$attributeA, df$attributeB)
node.attrs <- do.call(rbind, lapply(u.nodes, function(x) {
data.frame(node=x, attr=unique(attrs[nodes == x]))
}))
n.na <- nrow(node.attrs)
n.e <- nrow(df)
# Constraints limiting each node to have exactly one attribute
node.one.attr <- t(sapply(u.nodes, function(i) {
c(node.attrs$node == i, rep(0, 2*n.e))
}))
node.one.attr.dir <- rep("==", n.node)
node.one.attr.rhs <- rep(1, n.node)
# Constraints limiting edges to only be used if both attributes are selected
edge.flow <- do.call(rbind, lapply(seq_len(n.e), function(idx) {
i <- df$nodeA[idx]
j <- df$nodeB[idx]
a <- df$attributeA[idx]
b <- df$attributeB[idx]
na.i <- node.attrs$node == i & node.attrs$attr == a
na.j <- node.attrs$node == j & node.attrs$attr == b
rbind(c(-n.node*na.i, seq_len(n.e) == idx, -(seq_len(n.e) == idx)),
c(-n.node*na.j, seq_len(n.e) == idx, -(seq_len(n.e) == idx)),
c(n.node*na.i, seq_len(n.e) == idx, -(seq_len(n.e) == idx)),
c(n.node*na.j, seq_len(n.e) == idx, -(seq_len(n.e) == idx)))
}))
edge.flow.dir <- rep(c("<=", "<=", ">=", ">="), n.e)
edge.flow.rhs <- rep(0, 4*n.e)
# Constraints limiting net flow on non-root nodes
net.flow <- do.call(rbind, lapply(u.nodes, function(i) {
if (i == r) {
return(NULL)
}
rbind(c(rep(0, n.na), (df$nodeB == i) - (df$nodeA == i),
-(df$nodeB == i) + (df$nodeA == i)),
c(rep(0, n.na), (df$nodeB == i) - (df$nodeA == i),
-(df$nodeB == i) + (df$nodeA == i)))
}))
net.flow.dir <- rep(c(">=", "<="), n.node-1)
net.flow.rhs <- rep(c(0, 1), n.node-1)
# Build the model
mod <- lp(direction = "max",
objective.in = c(rep(0, n.na), (df$nodeA == r) - (df$nodeB == r),
-(df$nodeA == r) + (df$nodeB == r)),
const.mat = rbind(node.one.attr, edge.flow, net.flow),
const.dir = c(node.one.attr.dir, edge.flow.dir, net.flow.dir),
const.rhs = c(node.one.attr.rhs, edge.flow.rhs, net.flow.rhs),
binary.vec = seq_len(n.na))
opt <- node.attrs[mod$solution[1:n.na] > 0.999,]
valid.edges <- df[opt$attr[match(df$nodeA, opt$node)] == df$attributeA &
opt$attr[match(df$nodeB, opt$node)] == df$attributeB,]
list(attrs = opt,
edges = valid.edges,
objval = mod$objval)
}
It can solve the problem for subsets of the nodes in your original graph, but it becomes quite slow as you include an increasing number of nodes:
# Limit to 5 nodes
keep <- c(3, 4, 6, 16, 42)
df.play <- df[df$nodeA %in% keep & df$nodeB %in% keep,]
(opt.play <- optim(df.play, 42))
# $attrs
# node attr
# 24 3 50
# 45 4 50
# 50 42 91
# 60 16 127
# 87 6 109
#
# $edges
# nodeA nodeB attributeA attributeB
# 416 42 3 91 50
# 417 42 4 91 50
#
# $objval
# [1] 2
That run took 15 seconds. To speed this up, you could consider switching to a more powerful solver such as cplex or gurobi. These solvers are free for academic use but non-free otherwise.
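For what it's worth, the lp() model above translates almost mechanically to other R solver front-ends. As a minimal sketch, here is the same shape of call using the free GLPK solver via the Rglpk package (a stand-in two-variable binary program, since the real constraint matrices are built above; Rglpk availability is an assumption):

```r
library(Rglpk)  # GLPK front-end; arguments mirror lpSolve's lp()

# Tiny stand-in binary program: maximize x1 + 2*x2  s.t.  x1 + x2 <= 1
sol <- Rglpk_solve_LP(obj = c(1, 2),
                      mat = matrix(c(1, 1), nrow = 1),
                      dir = "<=",
                      rhs = 1,
                      types = c("B", "B"),  # binary decision variables
                      max = TRUE)
sol$optimum   # 2
sol$solution  # 0 1
```

The constraint matrix, direction vector, and rhs you already assembled can be passed through essentially unchanged; only the binary-variable flag moves from binary.vec to types.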
If this is the solution, is there a way, using igraph in R, to specify an "edge attribute constraint" and pull out the resulting fragmented graph?
Yes, given the attributes you can easily subset and plot the graph. For the 5-node example that I solved above:
g <- graph.data.frame(opt.play$edges, vertices=unique(c(df.play$nodeA, df.play$nodeB)))
plot(g, vertex.size = 6, edge.arrow.mode=1, edge.arrow.size = 0)
While working through this problem I stumbled upon a simpler solution. It seems my formulation of the problem was making the answer hard to see.
The core of the matter is: when two different constraints are applied to a node, it effectively becomes two distinct nodes.
Framing the challenge in this way allows us to rapidly construct graphs for each set of constraints. We can then quickly inspect these, look at the size, and (as my original question desired) select the set of constraints which preserves the largest graph.
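A toy example of that split-node encoding (a hypothetical two-row edge list; the composite id is node + attribute * 1000, which assumes node ids stay below 1000):

```r
# Hypothetical edge list: node 1 -> node 2 twice, under two conflicting
# attribute requirements on node 1 (10 vs 20)
df.toy <- data.frame(nodeA = c(1, 1), nodeB = c(2, 2),
                     attributeA = c(10, 20), attributeB = c(30, 30))

# Composite id = node + attribute * 1000 (assumes node ids < 1000)
split.ids <- df.toy[, 1:2] + df.toy[, 3:4] * 1E3
split.ids
#   nodeA nodeB
# 1 10001 30002
# 2 20001 30002
```

Node 1 now appears as two distinct composite nodes, 10001 and 20001, one per constraint, while node 2 stays a single node (30002) because both rows demand the same attribute.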
g = graph.data.frame(df); plot(g, vertex.size = 6, edge.arrow.mode=1, edge.arrow.size = 0)
# Combine the node and the rule into a new, unique node id referencing both the node and the constraint
df.split = df[, 1:2] + df[, 3:4] * 1E3
# Keep track of edge numbers in this dataset for later
df.split = cbind(df.split, row = seq(nrow(df)))
g.split = graph.data.frame(df.split); plot(g.split, vertex.size = 6, edge.arrow.mode=1, edge.arrow.size = 0)
# Decompose into unlinked sub graphs and count the edges in each
g.list = decompose.graph(g.split)
g.list.nodenum = sapply(g.list, ecount)
head(g.list.nodenum[order(g.list.nodenum, decreasing=T)])
[1] 9 8 5 5 5 5
# Select the largest subgraph
g.sub = g.list[[order(g.list.nodenum, decreasing=T)[1]]]
plot(g.sub)
# Find what edges these were in the original dataset
originaledges = E(g.sub)$row
originaledges
[1] 129 157 130 158 131 159 212 213 132
# Play with the resulting graph, the largest graph which obeys constraints at all nodes.
df.largest = df[originaledges,]
df.largest
nodeA nodeB attributeA attributeB
292 25 35 45 41
352 29 25 58 45
293 29 35 58 41
353 30 25 58 45
294 30 35 58 41
354 52 25 143 45
476 52 29 143 58
477 52 30 143 58
295 52 35 143 41
g.largest = graph.data.frame(df.largest); plot(g.largest, vertex.size = 6, edge.arrow.mode=1, edge.arrow.size = 0)
Hopefully this helps someone someday!

Plotting a dendrogram with only a subset of observations in R

From an hclust object, how can I extract only selected observations (to_plot below) and plot a dendrogram of just those? The subset of observations I want to plot will not correspond to the tree structure of the hclust object, so I can't simply extract branches from the dendrogram.
NB: I do not wish to re-cluster or re-calculate the distance matrix using only the subset of selected observations
Data
1/ hclust object
structure(list(merge = structure(c(-31L, -62L, -46L, -37L, -55L,
-47L, -75L, -57L, -6L, -2L, -45L, -99L, -51L, -12L, -30L, -4L,
3L, -53L, -61L, -27L, -56L, -83L, -38L, -101L, -69L, -11L, -14L,
-21L, -34L, -48L, -82L, -92L, -15L, -7L, -35L, -65L, -105L, -52L,
-40L, -64L, -23L, -94L, -98L, -1L, -25L, -8L, 8L, -41L, -3L,
-33L, -108L, 23L, -58L, -20L, -5L, -93L, 30L, -68L, -49L, -28L,
-17L, 9L, -32L, 35L, -95L, -67L, 26L, -107L, 17L, -19L, -74L,
-63L, 37L, 20L, -84L, 50L, -10L, -13L, 49L, 34L, 39L, 60L, -16L,
63L, 44L, 29L, 10L, -24L, 75L, 73L, 47L, 61L, 57L, 18L, 66L,
43L, 80L, 83L, -78L, -71L, 90L, 93L, 84L, 94L, 102L, 98L, 100L,
87L, 106L, 108L, -97L, 1L, -100L, -43L, -59L, -106L, 4L, -90L,
5L, 2L, -87L, -103L, -86L, -54L, -89L, -42L, 11L, 13L, 12L, -77L,
7L, 14L, 6L, -110L, 22L, -60L, -44L, -91L, -111L, -102L, -88L,
-104L, -50L, -22L, -36L, -79L, 28L, 24L, -66L, 15L, -29L, 25L,
32L, -109L, -39L, 45L, 42L, -96L, 16L, 33L, 19L, 40L, 27L, 31L,
-9L, 41L, 46L, -80L, -81L, -70L, -26L, 21L, -73L, 48L, 38L, 36L,
53L, 56L, 51L, -72L, -85L, -76L, 52L, 58L, 71L, 59L, 64L, -18L,
68L, 54L, 55L, 65L, 70L, 79L, 72L, 74L, 69L, 78L, 77L, 76L, 62L,
81L, 82L, 67L, 86L, 85L, 95L, 89L, 92L, 88L, 91L, 97L, 96L, 99L,
103L, 104L, 105L, 101L, 107L, 109L), .Dim = c(110L, 2L)), height = c(0,
0.188350217744365, 0.247401000321179, 0.249231910045009, 0.261866742195707,
0.377720124194474, 0.378461142310176, 0.527418629683044, 0.636480697844057,
0.70489556723743, 0.799857388088743, 0.895267189098051, 0.940604516439695,
1, 1, 1.25645841742159, 1.47637080579504, 1.49661353166068, 1.60280854934758,
1.64538982117314, 1.65011076915935, 1.66666666666667, 1.8661900064933,
1.91530600787293, 1.95979930296005, 2, 2, 2, 2, 2, 2, 2, 2.06532735656427,
2.32083831336158, 2.44558763136158, 2.48004395957454, 2.65074432837975,
2.69489799737569, 2.71536352494182, 2.75337988132381, 2.87695888696678,
2.89093184314013, 2.91669905927746, 3, 3.03504556878056, 3.42442760079317,
3.50924315636259, 3.54456009196554, 3.58118052752614, 3.80716728885077,
4.26149878117642, 4.63502500606874, 4.66666666666667, 4.66666666666667,
4.76912295317528, 4.90702353976517, 4.92512811564295, 5, 5.15887380396718,
5.20227981903921, 5.39890417564938, 5.71781232947912, 5.94961450567626,
6.17569787723772, 6.21000141305934, 6.47150288200403, 6.48552894195153,
6.61209720286382, 7.27379923250834, 7.65301130607984, 7.74920607244712,
7.8800745368487, 8.17570945188961, 8.75305138718179, 8.87870428752716,
9.36365055557565, 9.68439736325147, 10, 10.121604958431, 10.2845151775143,
10.7517404855684, 10.8165382868783, 11.4489962313067, 11.5939995243571,
12.8179231278111, 12.3055509866599, 14.1589468158871, 14.6988252554622,
14.7792803434488, 15.276874084329, 16.0150635281041, 17.9467649484583,
21.2687065983256, 21.3844895922187, 24.196270007066, 25.3163200486723,
34.1772731084418, 37.4454933955768, 42.6291683810462, 45.1916356921658,
52.531016897072, 55.6590891226214, 61.0699226448619, 73.7706208334886,
98.5310119994231, 148.608243702477, 150.474954574704, 187.419419688973,
241.610436881262, 487.90491231433), order = c(2L, 62L, 31L, 97L,
46L, 100L, 45L, 87L, 108L, 61L, 99L, 103L, 105L, 21L, 91L, 38L,
47L, 106L, 64L, 30L, 89L, 33L, 15L, 50L, 49L, 81L, 57L, 90L,
94L, 69L, 83L, 12L, 54L, 6L, 55L, 59L, 56L, 75L, 37L, 43L, 16L,
19L, 72L, 84L, 74L, 85L, 10L, 35L, 36L, 41L, 96L, 53L, 51L, 86L,
11L, 60L, 58L, 14L, 44L, 78L, 17L, 26L, 40L, 66L, 5L, 9L, 71L,
24L, 13L, 18L, 48L, 102L, 8L, 25L, 39L, 28L, 70L, 95L, 52L, 101L,
110L, 7L, 22L, 20L, 82L, 88L, 67L, 65L, 79L, 34L, 111L, 27L,
77L, 68L, 80L, 32L, 73L, 3L, 4L, 42L, 107L, 93L, 23L, 29L, 98L,
92L, 104L, 1L, 109L, 63L, 76L), labels = c("DX_100203", "DX_100208",
"DX_30528", "DX_100159", "DX_100211", "DX_100215", "DX_100246", "DX_100253",
"DX_100271", "DX_100212", "DX_100035", "DX_100164", "DX_100249", "DX_100036",
"DX_100165", "DX_100221", "DX_100254", "DX_100262", "DX_100274", "DX_100046",
"DX_100171", "DX_100230", "DX_100255", "DX_100275", "DX_100180", "DX_100269",
"DX_100278", "DX_100161", "DX_100229", "DX_100238", "DX_100093", "DX_100191",
"DX_100241", "DX_100237", "DX_100268", "DX_30515", "DX_90862", "DX_30529",
"DX_100073", "DX_90264", "DX_90221", "DX_30550", "DX_90885", "DX_100028",
"DX_100049", "DX_90257", "DX_90215", "DX_30527", "DX_30526", "DX_90892",
"DX_100051", "DX_90333", "DX_90286", "DX_90217", "DX_90252", "DX_90232",
"DX_30573", "DX_100214", "DX_90769", "DX_90907", "DX_100037", "DX_100054",
"DX_30568", "DX_90230", "DX_90280", "DX_90779", "DX_90959", "DX_100187",
"DX_100081", "DX_90310", "DX_90782", "DX_100023", "DX_90994", "DX_100042",
"DX_90304", "DX_100152", "DX_90272", "DX_90861", "DX_100043", "DX_100068",
"DX_30571", "DX_100085", "DX_90312", "DX_30590", "DX_90413", "DX_30561",
"DX_30548", "DX_90296", "DX_30558", "DX_90243", "DX_90293", "DX_90365",
"DX_30584", "DX_90274", "DX_90332", "DX_30583", "DX_30575", "DX_30523",
"DX_30578", "DX_90377", "DX_90297", "DX_30593", "DX_30555", "DX_30549",
"DX_90292", "DX_30565", "DX_30512", "DX_90285", "DX_90231", "DX_90209",
"DX_30570"), method = "ward", call = hclust(d = distance, method = method.hclust),
dist.method = "maximum"), .Names = c("merge", "height", "order",
"labels", "method", "call", "dist.method"), class = "hclust")
2/ subset of observations to extract for plotting as a dendrogram
to_plot <- c("DX_90264", "DX_90221", "DX_30550", "DX_90885", "DX_100028", "DX_100159",
"DX_100049", "DX_90257", "DX_90215", "DX_30527", "DX_30526", "DX_90892",
"DX_100051", "DX_90333", "DX_90286", "DX_90217", "DX_90252", "DX_90232",
"DX_30573", "DX_100214", "DX_90769", "DX_90907", "DX_100037", "DX_100054", "DX_30565")
Based on the comment of #RomanLuštrik I would suggest something like this:
hc <- hclust(dist(USArrests), "ave")
## select some observations to plot
set.seed(1)
toPlot <- sample(rownames(USArrests), size=20)
## use rownames as labels
labels <- rownames(USArrests)
## clear labels not present in toPlot
labels[ !(labels %in% toPlot) ] <- ""
plot(hc, labels=labels)
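If you want the unwanted leaves removed entirely rather than just blanked out, one alternative sketch uses the dendextend package (an assumption — it is not used elsewhere in this question): convert to a dendrogram and prune away every leaf not in your subset, which keeps the original merge heights:

```r
library(dendextend)  # assumed available; provides prune()

hc <- hclust(dist(USArrests), "ave")
dend <- as.dendrogram(hc)

## select some observations to keep
set.seed(1)
toPlot <- sample(rownames(USArrests), size = 20)

## prune() removes the named leaves, so drop everything NOT in toPlot
pruned <- prune(dend, setdiff(labels(dend), toPlot))
plot(pruned)
```

Note this still does not re-cluster anything; it only deletes leaves from the existing tree, so the remaining merge structure and heights come from the full-data clustering.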