R: split data into categories

I am trying to find the most efficient way to split a list of numbers into bins by value and then calculate a cumulative sum for each successive category.
I can't seem to get the value categories from this onto the plot.
> scores
[1] 115 119 119 134 121 128 128 152 97 108 98 130 108 110 111 122 106 142 143 140 141 151 125 126
> table(cut(scores,breaks=10))
(96.9,102] (102,108] (108,113] (113,119] (119,124] (124,130] (130,136] (136,141] (141,147] (147,152]
         2         1         4         1         4         5         1         2         2         2
> cumsum(table(cut(scores,breaks=10)))
(96.9,102] (102,108] (108,113] (113,119] (119,124] (124,130] (130,136] (136,141] (141,147] (147,152]
         2         3         7         8        12        17        18        20        22        24
> plot(100*cumsum(table(cut(scores,breaks=10)))/length(scores),ylab="percent of scores")
> lines(100*cumsum(table(cut(scores,breaks=10)))/length(scores))
This produces an acceptable plot, but the x axis shows index values (2, 4, 6, ...). How can I get the bin values (96.9, 102, etc.) on the axis instead? Is there a better way to do this?

You need to set xaxt = "n" to suppress the default x-axis labels, then draw them yourself with axis(), retrieving the bin labels with names():
plot(100*cumsum(table(cut(scores,breaks=10)))/length(scores),ylab="percent of scores", xaxt = "n")
lines(100*cumsum(table(cut(scores,breaks=10)))/length(scores))
axis(1, 1:10, names(table(cut(scores,breaks=10))))
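Since the same table is computed three times above, you can also store it once and reuse it; a minimal consolidated sketch (the object names tab and cum_pct are my own):
tab <- table(cut(scores, breaks = 10))
cum_pct <- 100 * cumsum(tab) / length(scores)
plot(cum_pct, ylab = "percent of scores", xaxt = "n", type = "b")  # type = "b" draws both points and lines
axis(1, at = seq_along(cum_pct), labels = names(tab))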

Related

cluster analysis with weight

I have a data frame 'heat' showing people's performance over time.
'Var1' represents each person's code.
'Var2' represents the time line (measured in days from the starting point).
'value' is the score they get at a given time point.
Var1 Var2 value
1 1 36 -0.6941826
2 2 36 -0.5585414
3 3 36 0.8032384
4 4 36 0.7973031
5 5 36 0.7536959
6 6 36 -0.5942059
....
54 10 73 0.7063218
55 11 73 -0.6949616
56 12 73 -0.6641516
57 13 73 0.6890433
58 14 73 0.6310124
59 15 73 -0.6305091
60 16 73 0.6809655
61 17 73 0.8957870
....
101 13 110 0.6495796
102 14 110 0.5990869
103 15 110 -0.6210600
104 16 110 0.6441960
105 17 110 0.7838654
....
Now I want to cluster their performance and reflect it on a heatmap. So I used the functions dist() and hclust() to cluster the data frame and plotted it with ggplot2:
library(ggplot2)
library(magrittr)  # for the %>% pipe

ggplot(data = heat) +
  geom_tile(aes(x = Var2, y = Var1 %>% as.character(), fill = value)) +
  scale_fill_gradient(low = "yellow", high = "red") +
  geom_vline(xintercept = c(746, 2142, 2917))
The plot itself is not reproduced here. However, I am more interested in what happened around day 746, day 2142 and day 2917 (the black vertical lines). I would like the scores around these days to carry more weight in the clustering: people showing similar performance around these days should have higher priority to be clustered together. Is there a way of doing this?
As long as your weights are integers, you can simply replicate those days' columns artificially.
If you want more control, compute the distance matrix yourself, with whatever weighted distance you want to use.
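A minimal sketch of the second suggestion, assuming heat is first reshaped to one row per person and one column per day (the weight of 5 and the three day columns are illustrative, not prescribed by the answer):
# one row per person (Var1), one column per day (Var2)
wide <- reshape(heat, idvar = "Var1", timevar = "Var2", direction = "wide")
m <- as.matrix(wide[, -1])

# up-weight the days of interest before computing distances
w <- rep(1, ncol(m))
w[colnames(m) %in% c("value.746", "value.2142", "value.2917")] <- 5  # illustrative weight
d <- dist(sweep(m, 2, sqrt(w), "*"))  # scaling columns by sqrt(w) gives a weighted Euclidean distance
hc <- hclust(d)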

Reordering columns of a dataframe on the basis of column mean

I have a data frame which I want to reorder by decreasing column mean.
SNR SignalIntensity ID
1 1.0035798 6.817374 109
2 11.9438978 11.545993 110
4 3.2894878 9.780420 112
5 4.0170266 9.871984 113
6 1.6310523 9.078186 114
7 1.6405415 8.228931 116
8 1.6625413 8.043536 117
9 0.8489116 6.179346 118
10 7.5312260 10.558180 119
11 7.2832911 10.474533 120
12 0.5732577 4.157294 121
14 0.8149754 6.045174 124
I use the following code
means <- colMeans(df) ## to get mean
df <- df[,order(means)] ## to reorder
to get the column means and their order, but I get the columns in increasing order of mean, the opposite of what I want. What should I do to reorder by decreasing column mean?
expected output
ID SignalIntensity SNR
1 109 6.817374 1.0035798
2 110 11.545993 11.9438978
4 112 9.780420 3.2894878
5 113 9.871984 4.0170266
6 114 9.078186 1.6310523
7 116 8.228931 1.6405415
8 117 8.043536 1.6625413
9 118 6.179346 0.8489116
10 119 10.558180 7.5312260
11 120 10.474533 7.2832911
12 121 4.157294 0.5732577
14 124 6.045174 0.8149754
The default in order is decreasing=FALSE. We can change that to TRUE:
df[, order(means, decreasing=TRUE)]
Or order on the negative of 'means':
df[, order(-means)]
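As a quick check with the example data (my own run-through; means rounded): colMeans(df) gives roughly SNR 3.5, SignalIntensity 8.4 and ID 116.1, so ordering decreasingly puts ID first, then SignalIntensity, then SNR, matching the expected output:
means <- colMeans(df)
round(means, 1)
##             SNR SignalIntensity              ID
##             3.5             8.4           116.1
df[, order(means, decreasing = TRUE)]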

Retrieving adjacency values in an nng igraph object in R

edited to improve the quality of the question as a result of the (wholly appropriate) spanking received by Spacedman!
I have a k-nearest neighbors object (an igraph) which I created as follows, using the file I have uploaded here:
I performed the following operations on the data, in order to create an adjacency matrix of distances between observations:
W <- read.csv("/path/sim_matrix.csv")
W <- W[, -c(1,3)]
W <- scale(W)
sim_matrix <- dist(W, method = "euclidean", upper=TRUE)
sim_matrix <- as.matrix(sim_matrix)
mygraph <- nng(sim_matrix, k=10)
This gives me a nice list of vertices and their ten closest neighbors; a small sample follows:
1 -> 25 26 28 30 32 144 146 151 177 183 2 -> 4 8 32 33 145 146 154 156 186 199
3 -> 1 25 28 51 54 106 144 151 177 234 4 -> 7 8 89 95 97 158 160 170 186 204
5 -> 9 11 17 19 21 112 119 138 145 158 6 -> 10 12 14 18 20 22 147 148 157 194
7 -> 4 13 123 132 135 142 160 170 173 174 8 -> 4 7 89 90 95 97 158 160 186 204
So far so good.
What I'm struggling with, however, is how to get access to the weights between the vertices so that I can do meaningful calculations on them. It shouldn't be so hard; this is a common thing to want from graphs, no?
Looking at the documentation, I tried:
degree(mygraph)
which gives me the sum of the weights for each node. But I don't want the sum, I want the raw data, so I can do my own calculations.
I tried
get.data.frame(mygraph,"E")[1:10,]
but this has none of the distances between nodes:
from to
1 1 25
2 1 26
3 1 28
4 1 30
5 1 32
6 1 144
7 1 146
8 1 151
9 1 177
10 1 183
I have attempted to get the values for the weights between vertices out of the graph object so that I can work with them, but no luck.
If anyone has any ideas on how to go about approaching this, I'd be grateful. Thanks.
It's not clear from your question whether you are starting with a dataset, or with a distance matrix, e.g. nng(x=mydata,...) or nng(dx=mydistancematrix,...), so here are solutions with both.
library(cccd)
df <- mtcars[,c("mpg","hp")] # extract from mtcars dataset
# knn using dataset only
g <- nng(x=as.matrix(df),k=5) # for each car, the 5 others most similar in mpg and hp
V(g)$name <- rownames(df) # meaningful names for the vertices
dm <- as.matrix(dist(df)) # full distance matrix
E(g)$weight <- apply(get.edges(g,1:ecount(g)),1,function(x)dm[x[1],x[2]])
# knn using distance matrix (assumes you have dm already)
h <- nng(dx=dm,k=5)
V(h)$name <- rownames(df)
E(h)$weight <- apply(get.edges(h,1:ecount(h)),1,function(x)dm[x[1],x[2]])
# same result either way
identical(get.data.frame(g),get.data.frame(h))
# [1] TRUE
So these approaches identify the distances from each vertex to its five nearest neighbors, and set the edge weight attribute to those values. Interestingly, plot(g) works fine, but plot(h) fails. I think this might be a bug in the plot method for cccd.
If all you want to know is the distances from each vertex to the nearest neighbors, the code below does not require package cccd.
knn <- t(apply(dm,1,function(x)sort(x)[2:6]))
rownames(knn) <- rownames(df)
Here, the matrix knn has a row for each vertex and columns giving the distance from that vertex to its 5 nearest neighbors. It does not tell you which neighbors those are, though.
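If you also need to know which vertices those neighbors are, a small companion sketch (my own addition, reusing dm from above; order(x)[1] is the vertex itself at distance 0, so it is skipped):
nn_idx <- t(apply(dm, 1, function(x) order(x)[2:6]))  # indices of the 5 nearest neighbors
rownames(nn_idx) <- rownames(df)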
Okay, I've found an nng function in the cccd package. Is that it? If so, then mygraph is just an igraph object and you can just do E(mygraph)$whatever to get the values of the edge attributes.
Following one of the cccd examples to create G1 here, you can get a data frame of all the edges and attributes thus:
get.data.frame(G1,"E")[1:10,]
You can get/set individual edge attributes with E(g)$whatever:
> E(G1)$weight=1:250
> E(G1)$whatever=runif(250)
> get.data.frame(G1,"E")[1:10,]
from to weight whatever
1 1 3 1 0.11861240
2 1 7 2 0.06935047
3 1 22 3 0.32040316
4 1 29 4 0.86991432
5 1 31 5 0.47728632
Is that what you are after? Any igraph package tutorial will tell you more!

grep: How can I search through my data using a wildcard in R

I have recently started using R, so now I am trying to get some data out of it. However, the results I get are quite confusing. I have daily data from 1961 to 1963, with dates in the format 1961-04-25, in a vector called date.
To search for just the period between April 10 and May 21 and display the dates, I used grep like this:
date[date >= grep("196.-04-10", date, value = TRUE) &
date <= grep("196.-05-21", date, value = TRUE)]
The results I get are somewhat confusing, as they step three days at a time instead of giving me every single day; see below.
[1] "1961-04-10" "1961-04-13" "1961-04-16" "1961-04-19" "1961-04-22" "1961-04-25" "1961-04-28" "1961-05-01" "1961-05-04" "1961-05-07" "1961-05-10"
[12] "1961-05-13" "1961-05-16" "1961-05-19" "1962-04-12" "1962-04-15" "1962-04-18" "1962-04-21" "1962-04-24" "1962-04-27" "1962-04-30" "1962-05-03"
[23] "1962-05-06" "1962-05-09" "1962-05-12" "1962-05-15" "1962-05-18" "1962-05-21" "1963-04-11" "1963-04-14" "1963-04-17" "1963-04-20" "1963-04-23"
[34] "1963-04-26" "1963-04-29" "1963-05-02" "1963-05-05" "1963-05-08" "1963-05-11" "1963-05-14" "1963-05-17" "1963-05-20"
I think the grep strategy is misguided, but maybe something like this will work ... basically, I'm computing the day-of-year (Julian date, yday()) and using that for comparison.
z <- as.Date(c("1961-04-10","1961-04-11","1961-04-12",
"1961-05-21","1961-05-22","1961-05-23",
"1963-04-09","1963-04-12","1963-05-21","1963-05-22"))
library(lubridate)
z[yday(z)>=yday(as.Date("1961-04-10")) & yday(z)<=yday(as.Date("1961-05-21"))]
## [1] "1961-04-10" "1961-04-11" "1961-04-12" "1961-05-21" "1963-04-12"
## [6] "1963-05-21"yz <- year(z)
Actually, this solution is fragile to leap-years ...
Better (?):
yz <- year(z)
z[z>=as.Date(paste0(yz,"-04-10")) & z<=as.Date(paste0(yz,"-05-21"))]
(You should definitely test this for yourself, I haven't tested carefully!)
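For what it's worth, a quick run of that version on the example z above (my own check, not the answerer's) returns the same six dates as the yday() approach:
yz <- year(z)
z[z >= as.Date(paste0(yz, "-04-10")) & z <= as.Date(paste0(yz, "-05-21"))]
## [1] "1961-04-10" "1961-04-11" "1961-04-12" "1961-05-21" "1963-04-12" "1963-05-21"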
Using a date format for your variable would be the best bet here.
## set up some test data
datevar <- seq.Date(as.Date("1961-01-01"),as.Date("1963-12-31"),by="day")
test <- data.frame(date=datevar,id=1:(length(datevar)))
head(test)
## which looks like:
> head(test)
date id
1 1961-01-01 1
2 1961-01-02 2
3 1961-01-03 3
4 1961-01-04 4
5 1961-01-05 5
6 1961-01-06 6
## find the date ranges you want
selectdates <-
(format(test$date,"%m") == "04" & as.numeric(format(test$date,"%d")) >= 10) |
(format(test$date,"%m") == "05" & as.numeric(format(test$date,"%d")) <= 21)
## subset the original data
result <- test[selectdates,]
## which looks as expected:
> result
date id
100 1961-04-10 100
101 1961-04-11 101
102 1961-04-12 102
103 1961-04-13 103
104 1961-04-14 104
105 1961-04-15 105
106 1961-04-16 106
107 1961-04-17 107
108 1961-04-18 108
109 1961-04-19 109
110 1961-04-20 110
111 1961-04-21 111
112 1961-04-22 112
113 1961-04-23 113
114 1961-04-24 114
115 1961-04-25 115
116 1961-04-26 116
117 1961-04-27 117
118 1961-04-28 118
119 1961-04-29 119
120 1961-04-30 120
121 1961-05-01 121
122 1961-05-02 122
123 1961-05-03 123
124 1961-05-04 124
125 1961-05-05 125
126 1961-05-06 126
127 1961-05-07 127
128 1961-05-08 128
129 1961-05-09 129
130 1961-05-10 130
131 1961-05-11 131
132 1961-05-12 132
133 1961-05-13 133
134 1961-05-14 134
135 1961-05-15 135
136 1961-05-16 136
137 1961-05-17 137
138 1961-05-18 138
139 1961-05-19 139
140 1961-05-20 140
141 1961-05-21 141
465 1962-04-10 465
...

Grouping ecological data in R

I'm looking at some ecological (diet) data and trying to work out how to group by predator. I would like to be able to extract the data so that I can look at the weights of each individual prey species for each predator, i.e. work out the mean weight of each species eaten by, e.g., Predator 117. I've put a sample of my data below.
Predator PreySpecies PreyWeight
1 114 10 4.2035496
2 114 10 1.6307026
3 115 1 407.7279775
4 115 1 255.5430495
5 117 10 4.2503708
6 117 10 3.6268814
7 117 10 6.4342073
8 117 10 1.8590861
9 117 10 2.3181421
10 117 10 0.9749844
11 117 10 0.7424772
12 117 15 4.2803743
13 118 1 126.8559155
14 118 1 276.0256158
15 118 1 123.0529734
16 118 1 427.1129793
17 118 3 237.0437606
18 120 1 345.1957190
19 121 1 160.6688815
You can use the aggregate function as follows:
aggregate(formula = PreyWeight ~ Predator + PreySpecies, data = diet, FUN = mean)
# Predator PreySpecies PreyWeight
# 1 115 1 331.635514
# 2 118 1 238.261871
# 3 120 1 345.195719
# 4 121 1 160.668881
# 5 118 3 237.043761
# 6 114 10 2.917126
# 7 117 10 2.886593
# 8 117 15 4.280374
There are a few different ways of getting what you want:
The aggregate function. Probably what you are after.
aggregate(PreyWeight ~ Predator + PreySpecies, data=dd, FUN=mean)
tapply: Very useful, but it only divides the variable by a single factor, hence we need to create a joint factor with the paste command:
tapply(dd$PreyWeight, paste(dd$Predator, dd$PreySpecies), mean)
ddply: Part of the plyr package. Very useful. Worth learning.
require(plyr)
ddply(dd, .(Predator, PreySpecies), summarise, mean(PreyWeight))
dcast: The output is in more of a table format (one row per predator, one column per prey species). Part of the reshape2 package.
require(reshape2)
dcast(dd, Predator ~ PreySpecies, value.var = "PreyWeight", fun.aggregate = mean, fill = 0)
And for a single predator, you can subset directly:
mean(data$PreyWeight[data$Predator == 117])
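If you want the per-species means for just that predator (the question's original example), one way is to reuse the aggregate approach from above on a subset (the values match the aggregate results shown earlier):
aggregate(PreyWeight ~ PreySpecies, data = dd[dd$Predator == 117, ], FUN = mean)
##   PreySpecies PreyWeight
## 1          10   2.886593
## 2          15   4.280374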
