I have the following df:
TS A_f1 A_p B_f1 B_p C_f1 C_p
1 10 100 15 150 17 170
2 20 200 25 250 27 270
3 30 300 35 350 37 370
This is, however, only a simplification of my real df with 40k+ observations and 100+ features.
TS is a timestamp column; every row lists stores ("A", "B", "C", n, ...) with their features (f1, p, f_n, ...).
Before training an LSTM on my df, I want to use the acf function (or pacf) to find patterns in my data and do some feature selection beforehand.
Any idea how I can do this with my data?
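For illustration, a minimal sketch of what I have in mind (assuming the data frame is called df and every column except TS is numeric) - running acf over each feature column and collecting the autocorrelations:

feature_cols <- setdiff(names(df), "TS")

# autocorrelation per feature, without plotting
acf_results <- lapply(df[feature_cols], function(x) {
  acf(x, lag.max = 20, plot = FALSE)$acf
})

# e.g. inspect the autocorrelations of store A's f1 feature
acf_results[["A_f1"]]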
I need to build an algorithm which will:
For 116 existing observations of 2 variables x1 and x2 (plotted individually: one single point)
Create new observations by merging extreme points of 2 existing observations (ex: observation 117 will have 2 extreme points, (x1_115, x2_115) and (x1_30, x2_30)). Do this for all combinations.
If, for one combination, one pair dominates the other: x1_a < x1_b AND x2_a < x2_b, only select a.
For the new set of 116+n observations, remove the dominated pairs, following the same logic as above.
Continue until we cannot create new non-dominated pairs.
I'm trying to solve this problem by creating independent functions for each operation. So far I have created the ConvexUnion function which merges extreme points (simply the union of 2 observations), but it does not take into account dominance yet.
ConvexUnion <- function(a, b) {
  output <- NULL
  for (i in 1:ncol(a)) {
    u <- unique(rbind(a[, i], b[, i]), incomparables = FALSE)
    output <- cbind(output, u)
  }
  output  # the extreme points of the newly created pair
}
a <- matrix(c(50, 70), ncol = 2)
b <- matrix(c(60, 85), ncol = 2)
v <- ConvexUnion(a, b)
TRAFO LABOR DELLV CLIENTS
1 49 15023 180119 11828
2 54 3118 212988 13465
3 31 6016 81597 4787
4 39 8909 127263 10291
5 9 1789 30095 2205
6 59 8327 190405 12045
7 95 11985 288146 16379
8 54 11309 208009 12252
9 13 3844 53631 4426
10 148 26348 459371 39831
11 17 3968 48798 3210
12 157 20131 366409 27050
13 18 4614 60366 4673
14 17 5941 49042 3950
15 77 6449 226815 12584
Here, the result for the new pair, which is the so-called convex union of a and b, would be (50,70) because a dominates b (both x1 and x2 are smaller).
How do I solve the problem?
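For the dominance step, a rough sketch (not a full solution, and assuming smaller values are better in both dimensions) of a helper that drops dominated rows from a two-column matrix:

RemoveDominated <- function(m) {
  keep <- sapply(seq_len(nrow(m)), function(i) {
    # row i is dominated if some other row is <= in both columns
    # and strictly smaller in at least one
    !any(m[, 1] <= m[i, 1] & m[, 2] <= m[i, 2] &
         (m[, 1] < m[i, 1] | m[, 2] < m[i, 2]))
  })
  m[keep, , drop = FALSE]
}

pts <- rbind(c(50, 70), c(60, 85), c(55, 60))
RemoveDominated(pts)  # drops (60, 85), which is dominated by (50, 70)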
I am working with dplyr and sample_n in R and trying to get evenly sized groups of rows to work with in my data frame.
So, I have a data set, head of data as follows:
> head(SEH)
Time.Level Demo.Age SEH.Total
92 PRE 12 110
335 PRE 12 80
720 MID 14 85
196 MID 11 95
408 POST 18 60
184 POST 10 99
I separated the data into three different data frames according to time level, so I have SEH.pre, SEH.mid and SEH.post. I then run describe and see that the pre, mid and post groups are unevenly sized. So I want to randomly sample the pre, mid and post groups down to an equal size. For example, the SEH.pre and SEH.mid group n sizes are below:
> describe(SEH.pre)
vars n
Time.Level* 1 887
Demo.Age 2 883
SEH.Total 3 887
> describe(SEH.mid)
vars n
Time.Level* 1 894
Demo.Age 2 872
SEH.Total 3 894
So now I run sample_n on SEH.pre, thinking that I can re-sample to an n of 860 across all columns. I run the following command:
SEH.pre2 <- sample_n(SEH.pre, 860, replace = FALSE)
And then I run describe, and the n for Demo.Age is less than the rest:
> describe(SEH.pre2)
vars n ...
Time.Level* 1 860
Demo.Age 2 856
SEH.Total 3 860
I feel like a big idiot, but I cannot figure out why this happens. I have tried it multiple times: the n for Demo.Age varies from 856 to 859, but it is never 860. I want all three columns to be 860. How do I do this? And why am I wrong in thinking that sample_n should create even groups out of uneven ones?
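A minimal check of one possible explanation (assuming Demo.Age simply contains NAs, which describe() does not count as observations): sample_n() samples whole rows, so dropping rows with a missing Demo.Age before sampling should give 860 in every column, provided at least 860 complete rows remain.

library(dplyr)

sum(is.na(SEH.pre$Demo.Age))   # how many ages are missing?

SEH.pre2 <- SEH.pre %>%
  filter(!is.na(Demo.Age)) %>%
  sample_n(860, replace = FALSE)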
I have a dataset “data_file” which contains five columns and 1 million rows:
X1 "ID_Number" (numeric),
X2 “Sample_Type”,
X3 “Signal_X” (numeric),
X4 “Signal_Y” (numeric),
X5 “Signal_Z” (numeric).
Each value of the ID corresponds to a set of values “Signal_X”, “Signal_Y” and “Signal_Z”.
ID_Number :: Sample_Type :: Signal_X :: Signal_Y :: Signal_Z
2 Sample 337 1538 0.6314152
2 Sample 106 1840 0.9923422
…
2 Sample 94 1445 0.9967044
10 Sample 164 1777 0.9950826
10 Sample 183 1933 0.9931457
10 Sample 176 1590 0.9690951
…
10 Sample 139 1339 0.9820210
12 Sample 154 1397 0.9700886
12 Sample 144 1206 0.9457763
… etc
By scanning over the IDs I found the correlation coefficient between “Signal_X” and “Signal_Y” using the following code:
library(plyr)
dataAE <- ddply(data_file, "ID_Number", summarise, CorrelationCoefficient = cor(Signal_X, Signal_Y))
View(dataAE)
The output should look like this.
datasetID Correlation Coefficient
1 2 0.48083503
2 3 -0.81036062
3 10 -0.32098672
4 12 -0.20251427
5 24 -0.18004939
6 51 -0.45803370
7 54 -0.59001642
8 63 -0.53976850
etc …
By analogy, I'm trying to compute the Hopkins statistic and find the optimal number of clusters for each ID in my dataset.
library(clustertend)
set.seed(123)
hopkins(data_file, n = nrow(data_file)-1)
I tried replacing CorrelationCoefficient=cor(Signal_X, Signal_Y) with
HopkinsStatistics=hopkins(Signal_X, Signal_Y)
… and got no results.
Manually, and without any problem, for each ID subset I used the following code:
library(clustertend)
library(factoextra)   # provides get_clust_tendency()

# Compute Hopkins statistic for one ID's subset of the data
set.seed(123)
subset$sampletype <- NULL   # drop the non-numeric sample-type column
df <- scale(subset)
res <- get_clust_tendency(df, 40, graph = FALSE)
# Hopkins statistic
res$hopkins_stat
res
The problem is how to automate these calculations, e.g. using a loop over the IDs.
Please help me. Thanks in advance.
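For illustration, a sketch of one possible way to automate it (assuming the column names above, and using get_clust_tendency() from factoextra as in my manual code rather than clustertend::hopkins()): split data_file by ID_Number just like the correlation example.

library(plyr)
library(factoextra)   # get_clust_tendency()

set.seed(123)
hopkins_by_id <- ddply(data_file, "ID_Number", function(d) {
  signals <- scale(d[, c("Signal_X", "Signal_Y", "Signal_Z")])
  res <- get_clust_tendency(signals, n = min(40, nrow(signals) - 1), graph = FALSE)
  data.frame(HopkinsStatistic = res$hopkins_stat)
})
View(hopkins_by_id)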
Edited to improve the quality of the question as a result of the (wholly appropriate) spanking received from Spacedman!
I have a k-nearest neighbors object (an igraph graph) which I created as follows, using the file I have uploaded here:
I performed the following operations on the data, in order to create an adjacency matrix of distances between observations:
library(cccd)   # nng()

W <- read.csv("/path/sim_matrix.csv")
W <- W[, -c(1, 3)]                                  # drop columns 1 and 3
W <- scale(W)
sim_matrix <- dist(W, method = "euclidean", upper = TRUE)
sim_matrix <- as.matrix(sim_matrix)                 # full distance matrix
mygraph <- nng(sim_matrix, k = 10)                  # 10 nearest neighbors per observation
This gives me a nice list of vertices and their ten closest neighbors; a small sample follows:
1 -> 25 26 28 30 32 144 146 151 177 183
2 -> 4 8 32 33 145 146 154 156 186 199
3 -> 1 25 28 51 54 106 144 151 177 234
4 -> 7 8 89 95 97 158 160 170 186 204
5 -> 9 11 17 19 21 112 119 138 145 158
6 -> 10 12 14 18 20 22 147 148 157 194
7 -> 4 13 123 132 135 142 160 170 173 174
8 -> 4 7 89 90 95 97 158 160 186 204
So far so good.
What I'm struggling with, however, is how to get access to the weights between the vertices so that I can do meaningful calculations with them. It shouldn't be so hard; this is a common thing to want from a graph, no?
Looking at the documentation, I tried:
degree(mygraph)
which gives me the sum of the weights for each node. But I don't want the sum, I want the raw data, so I can do my own calculations.
I tried
get.data.frame(mygraph,"E")[1:10,]
but this has none of the distances between nodes:
from to
1 1 25
2 1 26
3 1 28
4 1 30
5 1 32
6 1 144
7 1 146
8 1 151
9 1 177
10 1 183
I have attempted to extract values for the weights between vertices from the graph object so that I can work with them, but with no luck.
If anyone has any ideas on how to go about approaching this, I'd be grateful. Thanks.
It's not clear from your question whether you are starting with a dataset, or with a distance matrix, e.g. nng(x=mydata,...) or nng(dx=mydistancematrix,...), so here are solutions with both.
library(cccd)
df <- mtcars[,c("mpg","hp")] # extract from mtcars dataset
# knn using dataset only
g <- nng(x=as.matrix(df),k=5) # for each car, 5 other most similar mpg and hp
V(g)$name <- rownames(df) # meaningful names for the vertices
dm <- as.matrix(dist(df)) # full distance matrix
E(g)$weight <- apply(get.edges(g,1:ecount(g)),1,function(x)dm[x[1],x[2]])
# knn using distance matrix (assumes you have dm already)
h <- nng(dx=dm,k=5)
V(h)$name <- rownames(df)
E(h)$weight <- apply(get.edges(h,1:ecount(h)),1,function(x)dm[x[1],x[2]])
# same result either way
identical(get.data.frame(g),get.data.frame(h))
# [1] TRUE
So these approaches identify the distances from each vertex to its five nearest neighbors and set the edge weight attribute to those values. Interestingly, plot(g) works fine, but plot(h) fails. I think this might be a bug in the plot method for cccd.
If all you want to know is the distances from each vertex to the nearest neighbors, the code below does not require package cccd.
knn <- t(apply(dm,1,function(x)sort(x)[2:6]))
rownames(knn) <- rownames(df)
Here, the matrix knn has a row for each vertex, with the columns giving the distance from that vertex to its 5 nearest neighbors. It does not tell you which neighbors those are, though.
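If the identities matter too, a small follow-up sketch (assuming dm and df from above): use order() instead of sort() to recover the neighbor indices, then map them to row names.

nbr_idx  <- t(apply(dm, 1, function(x) order(x)[2:6]))            # indices of the 5 nearest neighbors
nbr_name <- matrix(rownames(df)[nbr_idx], nrow = nrow(nbr_idx))   # their names
rownames(nbr_name) <- rownames(df)
nbr_name["Mazda RX4", ]                                           # e.g. neighbors of the Mazda RX4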
Okay, I've found an nng function in the cccd package. Is that it? If so, then mygraph is just an igraph object, and you can just do E(mygraph)$whatever to get at the edge attributes.
Following one of the cccd examples to create G1 here, you can get a data frame of all the edges and attributes thus:
get.data.frame(G1,"E")[1:10,]
You can get/set individual edge attributes with E(g)$whatever:
> E(G1)$weight=1:250
> E(G1)$whatever=runif(250)
> get.data.frame(G1,"E")[1:10,]
from to weight whatever
1 1 3 1 0.11861240
2 1 7 2 0.06935047
3 1 22 3 0.32040316
4 1 29 4 0.86991432
5 1 31 5 0.47728632
Is that what you are after? Any igraph package tutorial will tell you more!
I have a data frame with 20 columns. I need to filter / remove noise from one column. After filtering with the convolve function I get a new vector of values, and many values from the original column are dropped by the filtering process. The problem is that I need the whole table (for later analysis) with only those rows where the filtered column has values, but I can't bind the filtered column to the original table because the number of rows differs. Let me illustrate using the 'age' column in the 'Orange' data set in R:
> head(Orange)
Tree age circumference
1 1 118 30
2 1 484 58
3 1 664 87
4 1 1004 115
5 1 1231 120
6 1 1372 142
The convolve filter used:
smooth <- function(x, D, delta) {
  z <- exp(-abs(-D:D / delta))
  r <- convolve(x, z, type = "filter") / convolve(rep(1, length(x)), z, type = "filter")
  r <- head(tail(r, -D), -D)
  r
}
Filtering the 'age' column
age2 <- smooth(Orange$age, 5,10)
data.frame(age2)
The age column has 35 rows while the age2 column has only 15. The original dataset has two more columns that I would like to work with as well. Now I only need the 15 rows of each column corresponding to the 15 rows of the age2 column. The filter here removed the first and last ten values from the age column. How can I apply the filter so that I get a truncated dataset with all columns and only the filtered rows?
You would need to figure out how the variables line up. If you can add NAs to age2 and then do Orange$age2 <- age2 followed by na.omit(Orange), you should have what you want. Or, equivalently, perhaps this is what you are looking for:
df <- tail(head(Orange, -10), -10) # chop off the first and last 10 observations
df$age2 <- age2
df
Tree age circumference age2
11 2 1004 156 915.1678
12 2 1231 172 876.1048
13 2 1372 203 841.3156
14 2 1582 203 911.0914
15 3 118 30 948.2045
16 3 484 51 1008.0198
17 3 664 75 955.0961
18 3 1004 108 915.1678
19 3 1231 115 876.1048
20 3 1372 139 841.3156
21 3 1582 140 911.0914
22 4 118 32 948.2045
23 4 484 62 1008.0198
24 4 664 112 955.0961
25 4 1004 167 915.1678
Edit: If you know the first and last x observations will be removed then the following works:
x <- 2
df <- tail(head(Orange, -x), -x) # chop off the first and last x observations
df$age2 <- age2
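If helpful, a small extension of the edit above (assuming smooth() is used exactly as defined in the question): convolve(type = "filter") drops D values at each end and head()/tail() drop D more, so x can be derived as 2 * D rather than hard-coded.

D <- 5
delta <- 10
x <- 2 * D                             # observations lost at each end
age2 <- smooth(Orange$age, D, delta)
df <- tail(head(Orange, -x), -x)       # 35 - 2 * x = 15 rows, matching length(age2)
df$age2 <- age2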